question: Is there a better notation for integrating twice?
question_id: 2084652
score: 12
link: https://math.stackexchange.com/questions/2084652/is-there-a-better-notation-for-integrating-twice
body:

I'm studying kinematics with varying acceleration. To calculate the displacement from the acceleration you have to integrate the acceleration with respect to t, then integrate that with respect to t, this time with limits.

I've been writing this:

[image: https://i.stack.imgur.com/Jzrr7.jpg]

But it looks a little messy. Is there a better way?

The notation on this webpage (http://tutorial.math.lamar.edu/Classes/CalcIII/IteratedIntegrals.aspx#MultInt_Iterated_Ex1a) is good but seems to be aimed at having a) limits on both integrals (for me the inner integral is indefinite) and b) different variables - I'm integrating with respect to t both times.
[ { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. 
answer_id: 2084669, score: 5

This does not quite make sense to me, due to
$$ \int t^2 \,dt = \frac 13 t^3 + C $$
Your integral just gives
$$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$
for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$; they are just different and equally justified choices of $C$.

So I'd stick to a double integral with limits, with variable limits on the inner one. It is misleading to use the same integration variable twice; the standard way around this is to use a "look-alike" variable name, for example the corresponding Greek letter. So, a possibility is to write
$$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$
where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write
$$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$
answer_id: 2084667, score: 1

You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:

$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.

Then, you could write:

$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$

In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:

\begin{align*}
x &= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\
&= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\
&= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\
&= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\
&= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\
&= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\
&= 3 v_{0} + \frac{609}{12}
\end{align*}

(Corrected after TonyK's comment: I added the numbers instead of subtracting them.)
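The same computation can be reproduced with $v_0$ left symbolic (a sympy sketch; the symbol names are mine):

```python
# Sketch: reproduce x = 3*v_0 + 609/12 with v_0 kept symbolic.
import sympy as sp

t, s, v0 = sp.symbols("t s v0")

x = sp.integrate(v0 + sp.integrate(s**2, (s, 0, t)), (t, 2, 5))
print(x)  # 3*v0 + 203/4, and 203/4 == 609/12
```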
answer_id: 2084668, score: 1

But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then
$$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)\,dx=C(b-a)+\int_a^b F(x)\,dx$$
Are you sure that this is what you want?
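The $C$-dependence is easy to see concretely (a sympy sketch with an arbitrary constant $C$; taking $f(x) = x^2$ and $a=2$, $b=5$ is my choice of example):

```python
# Sketch: the definite integral of an antiderivative shifts by C*(b - a)
# when the antiderivative is shifted by a constant C (here a = 2, b = 5).
import sympy as sp

x, C = sp.symbols("x C")

F = x**3 / 3  # one antiderivative of f(x) = x^2
diff = sp.integrate(F + C, (x, 2, 5)) - sp.integrate(F, (x, 2, 5))
print(diff)   # 3*C, i.e. C*(b - a)
```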
answer_id: 2084938, score: 1

You should have $\int \int a\; dt \; dt$, then the limits should be the same - both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$, starting from $t_0$ with position $s_0$ and velocity $v_0$, we can use $\tau$ and $\tau'$ as dummy variables and we would get
$$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\
=\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\
=\frac 1{12}\tau'^4\Big|_{t_0}^t-\frac{t_0^3}{3}(t-t_0)\\
=\frac 1{12}(t^4-t_0^4)-\frac{t_0^3}{3}(t-t_0)$$
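A symbolic check of this closed form, with all limits left free (a sympy sketch; `taup` is my stand-in for $\tau'$):

```python
# Sketch: check (1/12)(t^4 - t0^4) - (t0^3/3)(t - t0)
# against the double integral, with symbolic limits.
import sympy as sp

t, t0, tau, taup = sp.symbols("t t0 tau taup")  # taup stands in for tau'

inner = sp.integrate(tau**2, (tau, t0, taup))
outer = sp.integrate(inner, (taup, t0, t))
claimed = sp.Rational(1, 12) * (t**4 - t0**4) - t0**3 / 3 * (t - t0)
print(sp.simplify(outer - claimed))  # 0
```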
answer_id: 2086684, score: 0

If it were pure mathematics, it would be OK. Since it is kinematics, i.e. physics, it is *absurd*. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make reasonable calculations.

The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.

You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make *any* sense. And then, as others said, you need to carefully define the limits of integration.

A distance $s$ travelled over time $T$ is an integral of the velocity over that time:
$$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$
and the final position is the initial position $x_0=x(T_0)$ plus the distance:
$$x(T) = x_0 + s(T)$$

Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:
$$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$
where $v_0=v(T_0)$ is the initial velocity.

Now plug $v(t)$ into the first equation:
$$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$

Add the acceleration definition and the integration expression is complete:
$$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$

Work it from the inside out:
$$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$
so
$$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$
then
$$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$
$$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$
$$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$
$$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$

Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to
$$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$
For $T=5\,s$ and $T_0=2\,s$:
$$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$
$$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$
$$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$
$$ = 42.75\, q\,s^4$$
If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral:
$$ x(5\,s) = 42.75\,m$$
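The value $42.75\,q\,s^4$ can be confirmed the same way (a sympy sketch with units dropped and $q$ kept symbolic; $x_0 = v_0 = 0$ as assumed above):

```python
# Sketch: reproduce x(5) = 42.75*q for T0 = 2, T = 5, x0 = v0 = 0.
import sympy as sp

t, tau, q = sp.symbols("t tau q")

v = sp.integrate(q * tau**2, (tau, 2, t))  # v(t) = (q/3)*(t**3 - 8)
x = sp.integrate(v, (t, 2, 5))             # displacement from t = 2 to t = 5
print(x)                                   # 171*q/4, i.e. 42.75*q
```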
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" }, { "answer_id": 2084669, "score": 5, "text": "<p>This does not quite make sense to me, due to \n$$ \\int t^2 \\,dt = \\frac 13 t^3 + C $$\nYour integral just gives \n$$ \\int_2^5 \\int_t \\tau^2 \\, d\\tau\\, dt = \\frac 1{12} (5^4 - 2^4) + 3C $$\nfor some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\\pi$, just different and equally justified choices of $C$. </p>\n\n<p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a \"look-alike\" variable name, for example the corresponding greek letter. So, a possibility is to write\n $$ \\int_2^5 \\left(u + \\int_0^t \\tau^2 \\,d\\tau\\right) \\, dt $$\nwhere $u$ is the initial velocity.\nAnd if you want to stick to indefinite limits, I'd write\n$$ \\int_2^5 \\int_t \\tau^2 \\,d\\tau \\, dt $$</p>\n" }, { "answer_id": 2084667, "score": 1, "text": "<p>You want to write $F(t) = \\int t^{2}\\text{d}t$. This is equal to:</p>\n\n<p>$F(t) = F(0) + \\int_{0}^{t}s^{2}\\text{d}s$ provided the expression is defined at $t=0$.</p>\n\n<p>Then, you could write:</p>\n\n<p>$$x = \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t$$</p>\n\n<p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p>\n\n<p>\\begin{align*}\nx &amp;= \\int_{2}^{5}\\left(F(0)+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0}+\\int_{0}^{t}s^{2}\\text{d}s\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\left[\\frac{s^{3}}{3}\\right]_{0}^{t}\\right)\\text{d}t\\\\\n&amp;= \\int_{2}^{5}\\left(v_{0} + \\frac{t^{3}}{3}\\right)\\text{d}t\\\\\n&amp;= \\left[v_{0}t + \\frac{t^4}{12}\\right]_{2}^{5}\\\\\n&amp;= v_{0}(5-2) + \\frac{5^4}{12} - \\frac{2^4}{12}\\\\\n&amp;= 3 v_{0} + \\frac{609}{12}\n\\end{align*}</p>\n\n<p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p>\n" }, { "answer_id": 2084668, "score": 1, "text": "<p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then\n$$\\int_a^b\\left(\\int f\\right)dx=\\int_a^b (F(x)+C)dx=C(b-a)+\\int_a^bF(x)dx$$\nAre you sure that this is what you want?</p>\n" }, { "answer_id": 2084938, "score": 1, "text": "<p>You should have $\\int \\int a\\; dt \\; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\\tau$ and $\\tau'$ as dummy variables and we would get \n$$\\int_{t_0}^t \\int_{t_0}^{\\tau'} \\tau^2\\; d\\tau \\; d\\tau'=\\int_{t_0}^t\\left.\\left(\\frac 13\\tau^3\\right)\\right|_{t_0}^{\\tau'}\\;d\\tau'\\\\\n=\\int_{t_0}^t\\frac 13(\\tau'^3-t_0^3)\\; d\\tau'\\\\\n=\\frac 1{12}\\tau'^4|_{t_0}^t-t_0(t-t_0)\\\\\n=\\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p>\n" }, { "answer_id": 2086684, "score": 0, "text": "<p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p>\n\n<p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p>\n\n<p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\\int\\int q t^2 \\,dt\\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p>\n\n<p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time:\n$$s(T) = \\int\\limits_{t=T_0}^T v(t)\\,dt $$\nand a final position is an initial position $x_0=x(T_0)$ plus the distance:\n$$x(T) = x_0 + s(T)$$</p>\n\n<p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$:\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau$$\nwhere $v_0=v(T_0)$ is the initial velocity.</p>\n\n<p>Now plug $v(t)$ into the first equation:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t a(\\tau)\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Add the acceleration definition and the integration expression is complete:\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau\\right)\\,dt $$</p>\n\n<p>Work it from inside out:\n$$\\int q\\tau^2\\,d\\tau = \\frac 13 q\\tau^3$$\nso\n$$v(t) = v_0 + \\int\\limits_{\\tau=T_0}^t q\\tau^2\\,d\\tau = v_0 + \\left[\\frac 13q\\tau^3\\right]_{\\tau=T_0}^t = v_0 + \\frac q3(t^3-{T_0}^3)$$\nthen\n$$x(T) = x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3(t^3-{T_0}^3)\\right)\\,dt $$\n$$= x_0 + \\int\\limits_{t=T_0}^T \\left(v_0 + \\frac q3 t^3- \\frac q3 {T_0}^3\\right)\\,dt $$\n$$= x_0 + \\left[\\frac q{12} t^4 + \\left(v_0 - \\frac q3 {T_0}^3\\right)t\\right]_{t=T_0}^T $$\n$$= x_0 + \\frac q{12} (T^4 - {T_0}^4) + \\left(v_0 - \\frac q3 {T_0}^3\\right)(T-T_0) $$</p>\n\n<p>Assuming your frame of reference has been chosen so that $x_0=0\\,m$ and $v_0=0\\,\\frac ms$, the expression simplifies to\n$$x(T) = \\frac q{12} (T^4 - {T_0}^4) - \\frac q3 {T_0}^3 (T-T_0) $$\nFor $T=5\\,s$ and $T_0=2\\,s$:\n$$x(5\\,s) = \\frac q{12} ((5\\,s)^4 - (2\\,s)^4) - \\frac q3 (2\\,s)^3 (5\\,s-2\\,s) $$\n$$ = \\frac q{12} (625 - 16)s^4 - \\frac q3 8\\cdot 3\\, s^4 $$\n$$ = \\left(\\frac {625 - 16}{12} - 8\\right)q\\,s^4 $$\n$$ = 42.75\\, q\\,s^4$$\nIf $q = 1\\,\\frac m{s^4}$, as I suppose from your attempt to write the double integral:\n$$ x(5\\,s) = 42.75\\,m$$</p>\n" }, { "answer_id": 2084665, "score": 18, "text": "<p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. 
This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p>\n\n<p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \\int_0^ta(s)ds$, where $t$ is the time. So what you want is $\\int_2^5\\int_0^ts^2dsdt$.</p>\n\n<p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p>\n" } ]
Question: Is there a better notation for integrating twice? Answer 1: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 2: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to a double integral with limits, with variable limits on the inner one. It is misleading to use the same integration variable twice; the standard way around is to use a "look-alike" variable name, for example the corresponding Greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 3: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$, provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them.)</p> Answer 4: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 5: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same - both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. 
If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables, and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-\frac{{t_0}^3}{3}(t-t_0)\\ =\frac 1{12}(t^4-{t_0}^4)-\frac{{t_0}^3}{3}(t-t_0)$$</p> Answer 6: <p>If it were pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. In physics you need to track the physical quantities denoted by every expression and keep them consistent to make reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has the SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define the limits of integration.</p> <p>The distance $s$ travelled over time $T$ is the integral of the velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and the final position is the initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from the inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p>
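A quick numerical cross-check of the two conventions above (an editorial sketch, not part of the original answers; it assumes zero initial velocity, acceleration $a(t) = t^2$ in consistent units, and the window $[2, 5]$). Starting the inner integral at $0$, as in Answer 1:
$$\int_2^5 \int_0^t s^2 \, ds \, dt = \int_2^5 \frac{t^3}{3} \, dt = \left[\frac{t^4}{12}\right]_2^5 = \frac{625 - 16}{12} = 50.75$$
Starting it at $t_0 = 2$, as in Answers 5 and 6:
$$\int_2^5 \int_2^t s^2 \, ds \, dt = \int_2^5 \frac{t^3 - 8}{3} \, dt = \frac{609}{12} - \frac 83 (5 - 2) = 42.75$$
This matches the $42.75\, q\, s^4$ of Answer 6; the difference of $8$ between the two values is exactly the velocity accumulated on $[0, 2]$, namely $\int_0^2 s^2 \, ds = \frac 83$, carried over the $3$-unit window.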
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 38: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 39: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 40: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 41: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 42: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 43: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 44: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 45: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 46: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 47: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 48: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 49: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 50: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 51: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 52: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 53: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 54: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 55: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 56: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 57: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 58: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 59: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 60: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 61: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 62: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 63: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 64: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 65: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 66: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 67: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 68: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 69: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 70: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 71: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 72: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 73: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 74: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 75: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 76: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 77: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 78: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 79: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
So what you want is $\int_2^5\int_0^ts^2dsdt$.</p> <p>This distinction between $s$ and $t$ is important - without it, you'll run into ambiguities as to which $t$ each $dt$ applies to.</p> Answer 80: <p>This does not quite make sense to me, due to $$ \int t^2 \,dt = \frac 13 t^3 + C $$ Your integral just gives $$ \int_2^5 \int_t \tau^2 \, d\tau\, dt = \frac 1{12} (5^4 - 2^4) + 3C $$ for some arbitrary $C$, hence the result is just any(!) number. The result $23$ is as fine as $\pi$, just different and equally justified choices of $C$. </p> <p>So I'd stick to an double integral with limits, so variable limits on the inner one. It is misleading to use the same integration variable twice, the standard way around is to use a "look-alike" variable name, for example the corresponding greek letter. So, a possibility is to write $$ \int_2^5 \left(u + \int_0^t \tau^2 \,d\tau\right) \, dt $$ where $u$ is the initial velocity. And if you want to stick to indefinite limits, I'd write $$ \int_2^5 \int_t \tau^2 \,d\tau \, dt $$</p> Answer 81: <p>You want to write $F(t) = \int t^{2}\text{d}t$. This is equal to:</p> <p>$F(t) = F(0) + \int_{0}^{t}s^{2}\text{d}s$ provided the expression is defined at $t=0$.</p> <p>Then, you could write:</p> <p>$$x = \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t$$</p> <p>In your case, you probably want $F(0)$ to be your initial velocity $v_{0}$. Hence, the above integral is:</p> <p>\begin{align*} x &amp;= \int_{2}^{5}\left(F(0)+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0}+\int_{0}^{t}s^{2}\text{d}s\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \left[\frac{s^{3}}{3}\right]_{0}^{t}\right)\text{d}t\\ &amp;= \int_{2}^{5}\left(v_{0} + \frac{t^{3}}{3}\right)\text{d}t\\ &amp;= \left[v_{0}t + \frac{t^4}{12}\right]_{2}^{5}\\ &amp;= v_{0}(5-2) + \frac{5^4}{12} - \frac{2^4}{12}\\ &amp;= 3 v_{0} + \frac{609}{12} \end{align*}</p> <p>(Corrected after TonyK's comment: I added the numbers instead of subtracting them).</p> Answer 82: <p>But a definite integral of an indefinite integral has a big problem: it depends on the chosen primitive. Indeed, suppose that $F'(x)=f(x)$. Then $$\int_a^b\left(\int f\right)dx=\int_a^b (F(x)+C)dx=C(b-a)+\int_a^bF(x)dx$$ Are you sure that this is what you want?</p> Answer 83: <p>You should have $\int \int a\; dt \; dt$, then the limits should be the same-both the start and end times of the acceleration. It can be confusing to use $t$ as the dummy variable and also to use the same dummy variable in both integrals. If we want the position as a function of time $t$ starting from $t_0$ with position $s_0$ and velocity $v_0$ we can use $\tau$ and $\tau'$ as dummy variables and we would get $$\int_{t_0}^t \int_{t_0}^{\tau'} \tau^2\; d\tau \; d\tau'=\int_{t_0}^t\left.\left(\frac 13\tau^3\right)\right|_{t_0}^{\tau'}\;d\tau'\\ =\int_{t_0}^t\frac 13(\tau'^3-t_0^3)\; d\tau'\\ =\frac 1{12}\tau'^4|_{t_0}^t-t_0(t-t_0)\\ =\frac 1{12}(t^4-t_0^4)-t_0(t-t_0)$$</p> Answer 84: <p>If it was a pure mathematics, it would be OK. If it is kinematics, i.e. physics, it is <strong>absurd</strong>. 
In physics you need to track the physical quantities denoted by every expression and keep them consistent to make a reasonable calculations.</p> <p>The expression $t^2$ is a physical quantity 'time squared', different from the acceleration's 'length divided by time squared'; the latter has an SI unit $m/s^2$ while the former is in $s^2$.</p> <p>You need at least some constant $q$ in $m/(s^4)$ for a double integration $\int\int q t^2 \,dt\,dt$ to make <strong>any</strong> sense. And then, as others said, you need to carefully define limits of integration.</p> <p>A distance $s$ travelled over time $T$ is an integral of a velocity over that time: $$s(T) = \int\limits_{t=T_0}^T v(t)\,dt $$ and a final position is an initial position $x_0=x(T_0)$ plus the distance: $$x(T) = x_0 + s(T)$$</p> <p>Next, the velocity change over time follows from the acceleration, namely it's an integral of $a$: $$v(t) = v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau$$ where $v_0=v(T_0)$ is the initial velocity.</p> <p>Now plug $v(t)$ into the first equation: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t a(\tau)\,d\tau\right)\,dt $$</p> <p>Add the acceleration definition and the integration expression is complete: $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau\right)\,dt $$</p> <p>Work it from inside out: $$\int q\tau^2\,d\tau = \frac 13 q\tau^3$$ so $$v(t) = v_0 + \int\limits_{\tau=T_0}^t q\tau^2\,d\tau = v_0 + \left[\frac 13q\tau^3\right]_{\tau=T_0}^t = v_0 + \frac q3(t^3-{T_0}^3)$$ then $$x(T) = x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3(t^3-{T_0}^3)\right)\,dt $$ $$= x_0 + \int\limits_{t=T_0}^T \left(v_0 + \frac q3 t^3- \frac q3 {T_0}^3\right)\,dt $$ $$= x_0 + \left[\frac q{12} t^4 + \left(v_0 - \frac q3 {T_0}^3\right)t\right]_{t=T_0}^T $$ $$= x_0 + \frac q{12} (T^4 - {T_0}^4) + \left(v_0 - \frac q3 {T_0}^3\right)(T-T_0) $$</p> <p>Assuming your frame of reference has been chosen so that $x_0=0\,m$ and $v_0=0\,\frac ms$, the expression simplifies to $$x(T) = \frac q{12} (T^4 - {T_0}^4) - \frac q3 {T_0}^3 (T-T_0) $$ For $T=5\,s$ and $T_0=2\,s$: $$x(5\,s) = \frac q{12} ((5\,s)^4 - (2\,s)^4) - \frac q3 (2\,s)^3 (5\,s-2\,s) $$ $$ = \frac q{12} (625 - 16)s^4 - \frac q3 8\cdot 3\, s^4 $$ $$ = \left(\frac {625 - 16}{12} - 8\right)q\,s^4 $$ $$ = 42.75\, q\,s^4$$ If $q = 1\,\frac m{s^4}$, as I suppose from your attempt to write the double integral: $$ x(5\,s) = 42.75\,m$$</p> Answer 85: <p>No, there is no better notation - the double-integral notation is standard. However, the way you've written it is problematic. Notice that when you do an indefinite integral, you get a $+c$ at the end. This is a constant, so when integrated again we have $+ct$. Evaluating from $2$ to $5$, this gives a $+3c$ at the end of your answer - which you really don't want, since your answer should be a number.</p> <p>In a double integral, the inner integral should a) always be definite and b) be with respect to a different variable than the outer integral. In your case, recall that velocity is <em>not</em> the indefinite integral of acceleration - it's $v_0 + \int_0^ta(s)ds$, where $t$ is the time. 
Why does multiplication act like scaling and rotation of a vector in the complex plane?
2,085,375
12
https://math.stackexchange.com/questions/2085375/why-does-multiplication-act-like-scaling-and-rotation-of-a-vector-in-the-complex
<p>I regularly use the geometric analogy of multiplication by a complex number to represent a scaling and rotation of a vector in the complex plane. For a very simple example, <code>i</code> would point up along the Y axis and multiplying it by <code>i</code> again would be a 90 degree rotation resulting in something pointing in the -X direction.</p> <p>The thing is, I no longer recall why this is true. It's not obvious to me any longer why multiplication is in any way connected to rotation (scaling seems fairly obvious) and I was unable to explain the logic behind this useful trick to a friend who asked why it worked.</p> <p>Could I get a very clear explanation of this geometric interpretation of multiplication by complex numbers? I feel like it had to do with Euler's identity and the polar form of complex numbers but this math is quite a few years behind me.</p>
[ { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$ (the case $z = 0$ is excluded), with\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex numbers is what it is, from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands in relation\" refers to the ratio of the lengths, so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The simplest way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting value no longer has an imaginary part, so it lies on the x axis.
Further multiplications go through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match the quadrants of the complex plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number, as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree rotation, as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by a complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$, a $360k$ degree rotation. Here $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle; all solutions to $x^n=1$ lie on the unit circle since $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on the Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" } ]
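The answers above make two checkable claims: lengths multiply and angles add, and the matrix $[[x, -y], [y, x]]$ built from $z = x + yi$ acts on a vector exactly like multiplication by $z$. Here is a small numeric sketch of both (plain Python, standard library only; the sample values $z_1 = 1+2i$ and $z_2 = 3-i$ are arbitrary choices, picked so the angle sum does not wrap past $\pi$):

```python
import cmath

z1, z2 = 1 + 2j, 3 - 1j
p = z1 * z2                                    # (1+2j)(3-1j) = 5+5j

# Polar view: lengths multiply, angles add (mod 2*pi; no wrap here).
assert abs(abs(p) - abs(z1) * abs(z2)) < 1e-12
assert abs(cmath.phase(p) - (cmath.phase(z1) + cmath.phase(z2))) < 1e-12

# Matrix view: [[x, -y], [y, x]] applied to the vector (u, v) gives z1*z2.
x, y = z1.real, z1.imag
u, v = z2.real, z2.imag
assert (x*u - y*v, y*u + x*v) == (p.real, p.imag)

# Four multiplications by i are four 90-degree turns: back to the start.
w = 1 + 0j
for _ in range(4):
    w *= 1j
assert w == 1
```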
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place, because we get the same (complex) number back: $1$ is the multiplicative identity. Geometrically, we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words, multiplication by $-1$ results in a $180$ degree rotation, because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times, that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree rotation, as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by a complex number $x$ where</p>\n\n<p>$x^n=1$?</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation, where $k$ is an integer, because doing it $n$ times results in multiplication by $x^n=1$, a $360k$ degree rotation.</p>\n\n<p>Of course, multiplying a vector by a constant scales it, and likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle; all solutions to $x^n=1$ already lie on the unit circle, as $|x|=1$.</p>
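\n\n<p>Numerically (a small sketch using Python's standard <code>cmath</code> module; $n = 6$ is an arbitrary choice), taking $x = e^{2\\pi i/n}$ and multiplying by it repeatedly walks around the unit circle and returns to $1$ after $n$ steps:</p>\n\n<pre><code>import cmath

n = 6
x = cmath.exp(2j * cmath.pi / n)  # an n-th root of unity: a 360/n degree rotation
z = 1 + 0j
for k in range(1, n + 1):
    z = z * x
    print(k, z)   # after n steps z is back at 1 (up to rounding error)
</code></pre>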
\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on the Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product, respectively, of the angles and lengths of the factors.</p>
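\n\n<p>This \"lengths multiply, angles add\" fact is easy to confirm numerically (a minimal sketch; the factors $2+i$ and $-1+3i$ are arbitrary, chosen so the angle sum stays below $\\pi$ and avoids wrap-around):</p>\n\n<pre><code>import cmath
import math

z1, z2 = 2 + 1j, -1 + 3j
r1, t1 = cmath.polar(z1)       # length and angle of each factor
r2, t2 = cmath.polar(z2)
r, t = cmath.polar(z1 * z2)    # length and angle of the product

print(math.isclose(r, r1 * r2))   # True: the lengths multiply
print(math.isclose(t, t1 + t2))   # True: the angles add
</code></pre>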
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). </p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. This follows from Euler's identity\n$$\ne^{i \\theta} = \\cos(\\theta) + i \\sin(\\theta).\n$$\nFor a given complex number $z$, you can see how to pick $\\theta$ and $r$ so that\n$z = r e^{i \\theta}$. (By the way, you can prove Euler's identity by plugging $i \\theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p>\n\n<p>So, if $z_1 = r_1 e^{i \\theta_1}$ and $z_2 = r_2 e^{i \\theta_2}$, then\n$$\nz_1 z_2 = r_1 r_2 e^{i(\\theta_1 + \\theta_2)}.\n$$\nIn other words, to multiply two complex numbers, we \"add the angles and multiply the lengths\".</p>\n\n<p><strong>Edit:</strong></p>\n\n<p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form\n$$\nz = r(\\cos \\theta + i \\sin \\theta).\n$$\nSo, suppose that $z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$\nand $z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$.\nNow multiply $z_1$ and $z_2$:\n$$\nz_1 z_2 = r_1 r_2 (\\cos \\theta_1 \\cos \\theta_2 - \\sin \\theta_1 \\sin \\theta_2)\n+ i r_1 r_2(\\cos \\theta_1 \\sin \\theta_2 + \\sin \\theta_1 \\cos \\theta_2).\n$$\nIf you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. 
So we have discovered that\n$$\nz_1 z_2 = r_1 r_2( \\cos(\\theta_1 + \\theta_2) + i \\sin(\\theta_1 + \\theta_2)).\n$$\nThis shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p>\n\n<p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \\theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>\n" }, { "answer_id": 2085657, "score": 8, "text": "<p>$\\newcommand{\\I}{\\mathrm i}$Every complex number $z = x + y\\I$ can be represented by a matrix:\n$$ \\mathbf M = \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\n$$\nIt is easily checked that these matrices follow all rules of complex numbers. In particular, $\\det \\mathbf M = |z|^2$. If $w = u + v\\I$ is another complex number, we have\n$$ \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}\\begin{bmatrix}u\\\\v\\end{bmatrix} = \\begin{bmatrix} ux - vy\\\\uy + vx\\end{bmatrix}\n$$\nwhich is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p>\n\n<p>Every such $\\mathbf M$ can be factored into a rotation matrix $\\mathbf R$ and a scaling matrix $\\mathbf S$, with (The case of $z = 0$ is excluded)\n$$\n\\mathbf{R} = \\frac{1}{|z|} \\begin{bmatrix} x &amp;-y\\\\y&amp;x \\end{bmatrix}, \\qquad \\mathbf{S} = \\begin{bmatrix} |z|&amp;0\\\\0&amp;|z| \\end{bmatrix}\n$$\nThen $\\mathbf M = \\mathbf R\\mathbf S$.</p>\n\n<p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>\n" }, { "answer_id": 2085383, "score": 3, "text": "<p>Compute $e^{it}\\cdot e^{is}$ two ways:</p>\n\n<ul>\n<li>$e^{i(t+s)}=\\cos(t+s)+i\\sin(t+s)$</li>\n<li>$(\\cos t+i\\sin t)(\\cos s+i\\sin s)=(\\cos t\\cos s-\\sin t\\sin s)+i(\\sin t\\cos s+\\cos t\\sin s)$</li>\n</ul>\n" }, { "answer_id": 2085722, "score": 2, "text": "<p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p>\n\n<p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which \"stands in relation\" to $a$ as $b$ \"stands in relation\" to $1$. Here \"stands is relation\" refers to the ratio of the lengths so in formulas this is just the statement that $\\frac{ab}{a} = \\frac{b}{1}$.</p>\n\n<p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that \"stands in relation\" to $z_2$ as $z_1$ \"stands in relation\" to $1$. </p>\n\n<p>Let's first define what we mean by \"stands in relation\" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\\theta)$ where $r$ is the length of the vector and $\\theta$ is the angle relative to unit \"$1$\" (the $x$-axis). 
</p>\n\n<p>The relation of the number $z_1 = (r_1,\\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\\theta = 0$) by $\\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\\theta = \\theta_2$) by $\\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\\theta_1+\\theta_2)$ which is just the complex multiplication rule in disguise.</p>\n\n<p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href=\"https://i.stack.imgur.com/1T5Dt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1T5Dt.png\" alt=\"enter image description here\"></a></p>\n\n<p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to \"discover\" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define \"$i$\" as $\\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about \"$i$\").</p>\n" }, { "answer_id": 2085411, "score": 1, "text": "<p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p>\n\n<p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p>\n" }, { "answer_id": 2085453, "score": 1, "text": "<p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p>\n\n<p>What about multiplying by complex number $x$ where:</p>\n\n<p>$x^n=1$</p>\n\n<p>That's the equivalent of a $\\frac{360k}{n}$ degree rotation because doing it $\"n\"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p>\n\n<p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p>\n" }, { "answer_id": 2085561, "score": 0, "text": "<p>Everything about how the multiplication of complex numbers works is a consequence of $(\\pm i)^2 = -1$, i.e.,</p>\n\n<p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p>\n\n<p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>\n" }, { "answer_id": 2085385, "score": 13, "text": "<p>Every complex number can be written in the form $r e^{i \\theta}$. 
Question: Why does multiplication act like scaling and rotation of a vector in the complex plane?

Answer 1: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>
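<p>A quick numerical check of Answer 1's rule, added as an illustrative sketch rather than as part of the answer itself: it assumes only Python's standard <code>cmath</code> module, and the sample values <code>z1</code> and <code>z2</code> are arbitrary.</p>

<pre><code>import cmath

# Two arbitrary sample values (any nonzero complex numbers work).
z1 = 3 + 4j
z2 = 1 - 2j

r1, t1 = cmath.polar(z1)   # length and angle of z1
r2, t2 = cmath.polar(z2)   # length and angle of z2

# "Multiply the lengths, add the angles": rebuild the product in polar form.
rebuilt = cmath.rect(r1 * r2, t1 + t2)

print(z1 * z2, rebuilt)                 # both print (11-2j)
assert cmath.isclose(z1 * z2, rebuilt)  # agreement up to rounding
</code></pre>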
Answer 2: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows that a complex number can <em>be treated both as a vector in the plane and as a linear transformation</em>.</p> <p>Every such $\mathbf M$ with $z \neq 0$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p>

Answer 3: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul>

Answer 4: <p>Here is a different way to get some intuition on why the multiplication rule of the complex numbers is as it is, from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands in relation" refers to the ratio of the lengths, so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$.</p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to the unit "$1$" (the $x$-axis).</p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$, which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two-dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p>
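<p>A small sketch of the matrix picture from Answer 2, added for illustration in plain dependency-free Python (the helper names <code>as_matrix</code> and <code>apply_to</code> are ad hoc): it builds the matrix $\begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}$ by hand and checks that applying it to $w$ reproduces the ordinary product $zw$.</p>

<pre><code>def as_matrix(z):
    # The matrix [[x, -y], [y, x]] representing z = x + yi, as in Answer 2.
    x, y = z.real, z.imag
    return [[x, -y], [y, x]]

def apply_to(m, w):
    # Apply the 2x2 matrix m to the vector (u, v), where w = u + vi.
    u, v = w.real, w.imag
    return complex(m[0][0] * u + m[0][1] * v,
                   m[1][0] * u + m[1][1] * v)

z, w = 2 + 1j, 3 - 4j
print(apply_to(as_matrix(z), w))  # (10-5j)
print(z * w)                      # (10-5j), the same product
</code></pre>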
Answer 5: <p>The simplest way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It points upwards and lies on the $y$-axis. Multiplying by $i$ from here is simple, since $i\cdot(0+i) = i^2 = -1$. The resulting value has no imaginary part, so it lies on the $x$-axis. Further multiplications go through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$, which correspond to successive quarter-turns of the complex plane.</p>

Answer 6: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number, as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words, multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree rotation, as doing it a minimum of $4$ times is equivalent to multiplying by $1$.</p> <p>What about multiplying by a complex number $x$ where:</p> <p>$x^n=1$</p> <p>That is the equivalent of a $\frac{360k}{n}$ degree rotation, because doing it $n$ times results in multiplication by $x^n=1$, a $360k$ degree rotation, where $k$ is an integer.</p> <p>Of course multiplying a vector by a constant scales it, and likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle; all solutions to $x^n=1$ lie on the unit circle, as $|x|=1$.</p>

Answer 7: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on the Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors.</p>
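<p>Finally, a short sketch of the cycle described in Answers 5 and 6, again added for illustration and assuming only Python's standard <code>cmath</code> module: repeated multiplication by $i$ steps through quarter-turns, and an $n$-th root of unity steps through $360/n$ degree rotations.</p>

<pre><code>import cmath

# Powers of i cycle through the four quarter-turn positions of Answer 5:
# 1, i, -1, -i (printed with Python's complex-number formatting).
print([1j ** k for k in range(4)])

# An n-th root of unity x (so x ** n == 1) rotates by 360/n degrees (Answer 6).
n = 6
x = cmath.exp(2j * cmath.pi / n)
print(abs(x))                    # 1.0 up to rounding: x lies on the unit circle
assert cmath.isclose(x ** n, 1)  # n applications amount to one full turn
</code></pre>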
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 24: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 25: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 26: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 27: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 28: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 29: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. 
However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 30: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 31: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 32: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. 
This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 33: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 34: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 35: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 36: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. 
$$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 37: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 38: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 39: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. 
Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 40: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 41: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. 
</p> Answer 42: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 43: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 44: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 45: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 46: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 47: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 48: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 49: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 50: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. 
However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 51: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 52: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 53: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. 
This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 54: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 55: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 56: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 57: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. 
$$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 58: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 59: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 60: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. 
Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 61: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 62: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. 
</p> Answer 63: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 64: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 65: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 66: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 67: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 68: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 69: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 70: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 71: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. 
Answer 71: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, you recognize that they have miraculously appeared on the right-hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p>
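<p>"Add the angles and multiply the lengths" can also be confirmed directly with Python's standard <code>cmath</code> module (the sample moduli and angles below are arbitrary choices of mine):</p> <pre><code>import cmath

z1 = cmath.rect(2.0, 0.3)       # r1 = 2.0, theta1 = 0.3 rad, i.e. 2 e^{0.3 i}
z2 = cmath.rect(1.5, 1.1)       # r2 = 1.5, theta2 = 1.1 rad

r, theta = cmath.polar(z1 * z2)
print(r)                        # 3.0 (up to rounding) = 2.0 * 1.5: lengths multiply
print(theta)                    # 1.4 (up to rounding) = 0.3 + 1.1: angles add
</code></pre>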
This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 75: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 76: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 77: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 78: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. 
$$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 79: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 80: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 81: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. 
Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 82: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 83: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. 
</p> Answer 84: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 85: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 86: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 87: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 88: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 89: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 90: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 91: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 92: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. 
However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 93: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 94: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 95: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. 
This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 96: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 97: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 98: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 99: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. 
$$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 100: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 101: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 102: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. 
Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 103: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 104: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. 
</p> Answer 105: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 106: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 107: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. 
This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 108: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 109: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 110: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. 
Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 111: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 112: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 113: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. $$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. 
However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 114: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 115: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 116: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. 
This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 117: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 118: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. </p> Answer 119: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p> Answer 120: <p>Every complex number can be written in the form $r e^{i \theta}$. This follows from Euler's identity $$ e^{i \theta} = \cos(\theta) + i \sin(\theta). $$ For a given complex number $z$, you can see how to pick $\theta$ and $r$ so that $z = r e^{i \theta}$. (By the way, you can prove Euler's identity by plugging $i \theta$ into the Taylor series for $e^x$ -- it's one of the most fun calculations in math.)</p> <p>So, if $z_1 = r_1 e^{i \theta_1}$ and $z_2 = r_2 e^{i \theta_2}$, then $$ z_1 z_2 = r_1 r_2 e^{i(\theta_1 + \theta_2)}. 
$$ In other words, to multiply two complex numbers, we "add the angles and multiply the lengths".</p> <p><strong>Edit:</strong></p> <p>Here is an alternative answer that avoids using Euler's identity. Even without Euler's identity, it's clear that any complex number $z$ can be written in the form $$ z = r(\cos \theta + i \sin \theta). $$ So, suppose that $z_1 = r_1 (\cos \theta_1 + i \sin \theta_1)$ and $z_2 = r_2 (\cos \theta_2 + i \sin \theta_2)$. Now multiply $z_1$ and $z_2$: $$ z_1 z_2 = r_1 r_2 (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i r_1 r_2(\cos \theta_1 \sin \theta_2 + \sin \theta_1 \cos \theta_2). $$ If you remember the addition formulas from trigonometry, we recognize that they have miraculously appeared on the right hand side. So we have discovered that $$ z_1 z_2 = r_1 r_2( \cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)). $$ This shows, again, that when we multiply complex numbers we add the angles and multiply the lengths.</p> <p>Comments: One might object that in this alternative proof, the appearance of the sum formulas for sine and cosine seems like a miracle. So I probably prefer the first proof that uses Euler's identity. However, one might object that Euler's identity is itself a miracle, because when we plug $i \theta$ into the Taylor series for $e^x$, we are shocked to find that sine and cosine pop out. (At least, I don't see why I shouldn't be shocked.) When things work out this nicely, we know we have stumbled upon something perfect and beautiful.</p> Answer 121: <p>$\newcommand{\I}{\mathrm i}$Every complex number $z = x + y\I$ can be represented by a matrix: $$ \mathbf M = \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix} $$ It is easily checked that these matrices follow all rules of complex numbers. In particular, $\det \mathbf M = |z|^2$. If $w = u + v\I$ is another complex number, we have $$ \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix} = \begin{bmatrix} ux - vy\\uy + vx\end{bmatrix} $$ which is precisely the product $zw$. This shows complex numbers can <em>both be treated as a vector on the plane and a linear transformation</em>.</p> <p>Every such $\mathbf M$ can be factored into a rotation matrix $\mathbf R$ and a scaling matrix $\mathbf S$, with (The case of $z = 0$ is excluded) $$ \mathbf{R} = \frac{1}{|z|} \begin{bmatrix} x &amp;-y\\y&amp;x \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} |z|&amp;0\\0&amp;|z| \end{bmatrix} $$ Then $\mathbf M = \mathbf R\mathbf S$.</p> <p>Hence every complex multiplication can be seen as <em>applying a rotation and a scaling to a vector on the plane</em>.</p> Answer 122: <p>Compute $e^{it}\cdot e^{is}$ two ways:</p> <ul> <li>$e^{i(t+s)}=\cos(t+s)+i\sin(t+s)$</li> <li>$(\cos t+i\sin t)(\cos s+i\sin s)=(\cos t\cos s-\sin t\sin s)+i(\sin t\cos s+\cos t\sin s)$</li> </ul> Answer 123: <p>Here is a different way to get some intuition on why the multiplication rule of the complex number is as it is from a purely geometrical point of view.</p> <p>The real numbers can be represented by line-segments (as in Euclidean geometry) and in this representation multiplication can be viewed geometrically as follows: the line-segment of length $ab$ is a line-segment which "stands in relation" to $a$ as $b$ "stands in relation" to $1$. 
Here "stands is relation" refers to the ratio of the lengths so in formulas this is just the statement that $\frac{ab}{a} = \frac{b}{1}$.</p> <p>The same type of geometrical interpretation lies behind complex multiplication: $z_1z_2$ is the number that "stands in relation" to $z_2$ as $z_1$ "stands in relation" to $1$. </p> <p>Let's first define what we mean by "stands in relation" here. For this purpose we will represent the complex numbers as pairs (think vectors in the plane) of real numbers $z = (r,\theta)$ where $r$ is the length of the vector and $\theta$ is the angle relative to unit "$1$" (the $x$-axis). </p> <p>The relation of the number $z_1 = (r_1,\theta_1)$ to $1 = (1,0)$ is the following: we scale the length ($1$) by $r_1$ and rotate (from $\theta = 0$) by $\theta_1$. Likewise the relation of $z_1z_2$ relative to $z_2$ is: scale the length ($r_2$) by $r_1$ and rotate (from $\theta = \theta_2$) by $\theta_1$. This is illustrated below and we see that this leads to $z_1z_2 = (r_1r_2,\theta_1+\theta_2)$ which is just the complex multiplication rule in disguise.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/1T5Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1T5Dt.png" alt="enter image description here"></a></p> <p>In fact this procedure, generalizing multiplication of scalars to two dimensional vectors in the way we did above, is a way to "discover" the complex numbers (or more technically a ring with the same algebraic structure as the complex numbers) without even having to define "$i$" as $\sqrt{-1}$ (making it a good formulation to convince people that there is nothing magical about "$i$").</p> Answer 124: <p>The most simple way to explain this is by raising $i$ to various powers. Begin with the coordinate $0+i$ in Cartesian form. It will be pointing upwards and lie on the y axis. Multiplying by $i$ from here is simple using this example, since $i*(0+i) = i^2 = -1$. The resulting graph no longer has $i$, so it only lies on the x axis. Further multiplications goes through the exponent cycle, rotating the vector.</p> <p>More generally, further multiplications by $i$ cycle through $i, -1, -i, 1$ which match quadrants on the imaginary plane.</p> Answer 125: <p>If we multiply by $1$ we end up in the same place because we get the same (complex) number as $1$ is the multiplicative identity. Geometrically we rotated $360$ degrees, back to the same place. If we multiply by $-1$ twice, that is the same as multiplying by $1$. In other words multiplication by $-1$ results in a $180$ degree rotation because doing it a minimum of twice results in a $360$ degree rotation. If we multiply by $i$ four times then that is the same as multiplying by $i^4=1$, so multiplying by $i$ results in a $90$ degree as doing it a minimum of $4$ times is the equivalent of multiplying by $1$.</p> <p>What about multiplying by complex number $x$ where:</p> <p>$x^n=1$</p> <p>That's the equivalent of a $\frac{360k}{n}$ degree rotation because doing it $"n"$ times results in multiplication by $x^n=1$ ,a $360k$ degree rotation. That is where $k$ is an integer. </p> <p>Of course multiplying a vector by a constant scales it, likewise with complex numbers. And we can always scale a nonzero complex number so that it lies somewhere on the unit circle, all solutions to $x^n=1$ lie on the unit circle as $|x|=1$. 
</p> Answer 126: <p>Everything about how the multiplication of complex numbers works is a consequence of $(\pm i)^2 = -1$, i.e.,</p> <p>$$(a+bi)(c+di) = ac +(ad+bc)i + bdi^2 = ac-bd + (ad+bc)i$$</p> <p>If you plot the factors and the product on the Cartesian plane with real and imaginary axes, it just works out that the angle and length of the product are the sum and product of the angles and lengths of the factors. </p>
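<p>Editorial note: the "add the angles and multiply the lengths" rule that several of these answers derive can likewise be verified numerically. This is an illustrative sketch (again in Python, not from the original answers), using only the standard <code>cmath</code> module; the sample angles are chosen small enough that their sum does not wrap past $\pi$.</p>

<pre><code>import cmath

z1 = cmath.rect(2.0, 0.3)   # length 2.0, angle 0.3 rad
z2 = cmath.rect(1.5, 1.1)   # length 1.5, angle 1.1 rad

product = z1 * z2

# The lengths multiply ...
assert abs(abs(product) - 2.0 * 1.5) < 1e-12
# ... and the angles add.
assert abs(cmath.phase(product) - (0.3 + 1.1)) < 1e-12
</code></pre>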
Show that if $2+2\sqrt{28n^2+1}$ is an integer then it must be a perfect square.
2,090,665
12
https://math.stackexchange.com/questions/2090665/show-that-if-22-sqrt28n21-is-an-integer-then-it-must-be-perfect-square
<p>As written in the title, I want to prove that </p> <blockquote> <p>If $n$ is an integer, show that if $2+2\sqrt{28n^2+1}$ is an integer then it must be a perfect square. </p> </blockquote> <p>I am struggling to make a start. Please help.</p>
[ { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" }, { "answer_id": 2090671, "score": 15, "text": "<p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p>\n\n<p>Notice that $2+2\\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p>\n\n<p>Now, $$28n^2=m^2-1=(m+1)(m-1)\\implies 7n^2=(\\frac{m+1}{2})(\\frac{m-1}{2})$$</p>\n\n<p>Hence, $(\\frac{m+1}{2})=7a^2$,$(\\frac{m-1}{2})=b^2$ or $(\\frac{m+1}{2})=b^2$, $(\\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. 
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\\frac{m+1}{2})$ and $(\\frac{m-1}{2})$ are consecutive integers and hence coprime</p>\n\n<p>If $\\frac{m+1}{2}=7a^2$ and $\\frac{m-1}{2}=b^2$ then $b^2\\equiv-1\\mod (7)$, a contradiction.</p>\n\n<p>Hence, $\\frac{m-1}{2}=7a^2$ and $\\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p>\n" }, { "answer_id": 2786588, "score": 2, "text": "<p>This stuff comes from a Pell equation. </p>\n\n<p>It seems likely that <a href=\"https://en.wikipedia.org/wiki/John_Pell\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p>\n\n<p><strong>Au fait, je m'appelle Pell.</strong></p>\n\n<p>$$ $$</p>\n\n<p>For the curious, the values of $n$ that work are\n$$ 0, \\; \\; 24, \\; \\; 6096, \\; \\; 1548360, \\; \\; 393277344, \\; \\; \\cdots $$\nsatisfying\n$$ n_{j+2} = 254 n_{j+1} - n_j $$\nand $w = \\sqrt{2 + 2 \\sqrt{28 n^2 + 1}}$\n$$ 2, \\; \\; 16, \\; \\; 254, \\; \\; 4048, \\; \\; 64514, \\; \\; \\cdots $$\nsatisfying\n$$ w_{j+2} = 16 w_{j+1} - w_j $$</p>\n" }, { "answer_id": 2786645, "score": 2, "text": "<p>Here's another answer which uses the Pell equation.</p>\n\n<p>As mentioned above, in order for $2 + 2\\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p>\n\n<p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p>\n\n<p>All solutions to this equation are of the form $x_n+y_n\\sqrt{28} = \\pm\\left(127+24\\sqrt{28}\\right)^n, n \\in \\mathbb{Z}$.</p>\n\n<p>This means $x_n = \\pm\\frac{1}{2}\\left[\\left(127+24\\sqrt{28}\\right)^n+\\left(127-24\\sqrt{28}\\right)^n\\right]$.</p>\n\n<p>Also note that $2 + 2\\sqrt{28y_n^2 + 1} = 2+2\\sqrt{x_n^2} = 2+2|x_n| = 2 + \\left(127+24\\sqrt{28}\\right)^n + \\left(127-24\\sqrt{28}\\right)^n$.</p>\n\n<p>But $127 + 24\\sqrt{28} = \\left(8+3\\sqrt{7}\\right)^2$ and $\\left(8+3\\sqrt{7}\\right)\\left(8-3\\sqrt{7}\\right) = 1$ so we finally obtain:\n$$2+2\\sqrt{28y_n^2+1} = \\left(8+3\\sqrt{7}\\right)^{2n} + 2\\left(8+3\\sqrt{7}\\right)^n\\left(8-3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^{2n} = \\left[\\left(8+3\\sqrt{7}\\right)^n+\\left(8-3\\sqrt{7}\\right)^n\\right]^2$$</p>\n\n<p>which is the square of an integer (you can check with the binomial theorem).</p>\n" } ]
Question: Show that if $2+2\sqrt{28n^2+1}$ is an integer then it must be a perfect square.

Answer 1: <p>With suggestions from @barak manos and @Alqatrkapa I noted that I could improve my post.</p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is the perfect square of an odd integer, say $m$ (because $28n^2+1$ is odd itself).</p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$, $(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times a square, and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times a square and the other is simply a square (as $Square * Square=Square$); one might object that neither factor need be a square while the product still is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime.</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\pmod{7}$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$, a perfect square.</p>

Answer 2: <p>This stuff comes from a Pell equation.</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and the corresponding values of $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ are $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p>

Answer 3: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$, so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p>
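A remark on the contradiction step in Answer 1 (a supplement added here, not part of the original thread): squaring each residue class modulo $7$ shows that $-1$ is not a quadratic residue there, so $b^2 \equiv -1 \pmod{7}$ is indeed impossible:

$$ k \equiv 0,1,2,3,4,5,6 \pmod{7} \implies k^2 \equiv 0,1,4,2,2,4,1 \pmod{7}, \qquad -1 \equiv 6 \notin \{0,1,2,4\}. $$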
This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 5: <p>This stuff comes from a Pell equation. </p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 6: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 7: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 8: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 9: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 10: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 11: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 12: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 13: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 14: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 15: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 16: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 17: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 18: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 19: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 20: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 21: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 22: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 23: <p>This stuff comes from a Pell equation. 
</p> <p>It seems likely that <a href="https://en.wikipedia.org/wiki/John_Pell" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/John_Pell</a> was in France on many occasions, in which case he may well have said</p> <p><strong>Au fait, je m'appelle Pell.</strong></p> <p>$$ $$</p> <p>For the curious, the values of $n$ that work are $$ 0, \; \; 24, \; \; 6096, \; \; 1548360, \; \; 393277344, \; \; \cdots $$ satisfying $$ n_{j+2} = 254 n_{j+1} - n_j $$ and $w = \sqrt{2 + 2 \sqrt{28 n^2 + 1}}$ $$ 2, \; \; 16, \; \; 254, \; \; 4048, \; \; 64514, \; \; \cdots $$ satisfying $$ w_{j+2} = 16 w_{j+1} - w_j $$</p> Answer 24: <p>Here's another answer which uses the Pell equation.</p> <p>As mentioned above, in order for $2 + 2\sqrt{28y^2+1}$ to be an integer, it is necessary and sufficient that $28y^2 + 1$ is a perfect square.</p> <p>Hence we must look at the solutions to Pell's equation $x^2 - 28y^2 = 1$.</p> <p>All solutions to this equation are of the form $x_n+y_n\sqrt{28} = \pm\left(127+24\sqrt{28}\right)^n, n \in \mathbb{Z}$.</p> <p>This means $x_n = \pm\frac{1}{2}\left[\left(127+24\sqrt{28}\right)^n+\left(127-24\sqrt{28}\right)^n\right]$.</p> <p>Also note that $2 + 2\sqrt{28y_n^2 + 1} = 2+2\sqrt{x_n^2} = 2+2|x_n| = 2 + \left(127+24\sqrt{28}\right)^n + \left(127-24\sqrt{28}\right)^n$.</p> <p>But $127 + 24\sqrt{28} = \left(8+3\sqrt{7}\right)^2$ and $\left(8+3\sqrt{7}\right)\left(8-3\sqrt{7}\right) = 1$ so we finally obtain: $$2+2\sqrt{28y_n^2+1} = \left(8+3\sqrt{7}\right)^{2n} + 2\left(8+3\sqrt{7}\right)^n\left(8-3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^{2n} = \left[\left(8+3\sqrt{7}\right)^n+\left(8-3\sqrt{7}\right)^n\right]^2$$</p> <p>which is the square of an integer (you can check with the binomial theorem).</p> Answer 25: <p>With suggestions of @barak manos and @ Alqatrkapa I Noted that I can improve my post. </p> <p>Notice that $2+2\sqrt{28n^2+1}$ is an even integer. Also, $28n^2+1$ is a perfect square of an odd integer say $m$ (Because $28n^2+1$ is odd itself). </p> <p>Now, $$28n^2=m^2-1=(m+1)(m-1)\implies 7n^2=(\frac{m+1}{2})(\frac{m-1}{2})$$</p> <p>Hence, $(\frac{m+1}{2})=7a^2$,$(\frac{m-1}{2})=b^2$ or $(\frac{m+1}{2})=b^2$, $(\frac{m-1}{2})=7a^2$. This is because $7n^2$ is $7$ times of a square and thus the right side is also $7$ times a perfect square. This is only possible when one of them is $7$ times of a square and other is simply a square as $(Square * Square=Square)$, you can say that there is possibility that they both are not squares but the product is (like $2*8=16=4^2$), but notice that $(\frac{m+1}{2})$ and $(\frac{m-1}{2})$ are consecutive integers and hence coprime</p> <p>If $\frac{m+1}{2}=7a^2$ and $\frac{m-1}{2}=b^2$ then $b^2\equiv-1\mod (7)$, a contradiction.</p> <p>Hence, $\frac{m-1}{2}=7a^2$ and $\frac{m+1}{2}=b^2$. Hence, $2+2m=4b^2$ a perfect square.</p> Answer 26: <p>This stuff comes from a Pell equation. 
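The two recurrences above are easy to check numerically. The following is a minimal sketch (my own addition, not part of the original answers; it assumes Python's `math.isqrt`) that walks the $n$-sequence and confirms that $2+2\sqrt{28n^2+1}$ is the perfect square $w^2$:

```python
from math import isqrt

# Sketch (not from the original answers): walk the two recurrences and
# confirm that 2 + 2*sqrt(28*n^2 + 1) equals the perfect square w^2.
n, n_next = 0, 24        # n_{j+2} = 254 n_{j+1} - n_j
w, w_next = 2, 16        # w_{j+2} = 16 w_{j+1} - w_j
for _ in range(6):
    s = 28 * n * n + 1
    r = isqrt(s)
    assert r * r == s            # 28 n^2 + 1 is a perfect square
    assert 2 + 2 * r == w * w    # and 2 + 2*sqrt(...) equals w^2
    n, n_next = n_next, 254 * n_next - n
    w, w_next = w_next, 16 * w_next - w
print("all six checks passed")
```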
Must an injective or surjective map in an infinite dimensional vector space be a bijection?
2,091,104
12
https://math.stackexchange.com/questions/2091104/must-an-injective-or-surjective-map-in-an-infinite-dimensional-vector-space-be-a
<p>If we have some finite dimensional vector space <em>V</em> and a linear transformation $F: V\to V$, and we know that <em>F</em> is injective, we immediately know that it is also bijective (the same goes if we know that <em>F</em> is surjective).<br/> I'm curious whether the same rule applies if <em>V</em> is an infinite dimensional vector space: if we know that <em>F</em> is injective/surjective, does that again immediately imply that <em>F</em> is also bijective (intuitively I think it does)?</p>
[ { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. 
Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. 
The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. 
Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. 
The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. 
Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. The map</p>\n\n<p>$$S:\\ell_\\infty\\to\\ell_\\infty:\\langle x_0,x_1,x_2,\\ldots\\rangle\\mapsto\\langle 0,x_0,x_1,x_2,\\ldots\\rangle$$</p>\n\n<p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>\n" }, { "answer_id": 2091107, "score": 13, "text": "<p>No it does not, take for example the linear operator $D$, the derivative map from</p>\n\n<p>$$C^\\infty(\\Bbb R)\\to C^\\infty(\\Bbb R)$$</p>\n\n<p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p>\n" }, { "answer_id": 2091131, "score": 6, "text": "<p>Another counter-example, taken from algebra:</p>\n\n<p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p>\n" }, { "answer_id": 2091108, "score": 4, "text": "<p>It does not.</p>\n\n<p>Take the space of the real sequences, and the transformation </p>\n\n<p>$T(u) = v$ where $v_n = u_{n+1}$</p>\n\n<p>Then $T$ is surjective, but it's not injective.</p>\n" }, { "answer_id": 2091478, "score": 4, "text": "<p>Abstract example: take a basis $(b_i)_{i\\in I}$ of the space. Let be $J\\subset I$ with $|J| = |I|$ and $\\sigma:I\\longrightarrow J$ a bijection. The linear function defined by\n$$S(b_i) = b_{\\sigma(i)},$$\nis injective but not surjective while\n$$T(b_i) = b_{\\sigma^{-1}(i)},i\\in J,$$\n$$T(b_i) = 0,i\\in I\\setminus J,$$\nis surjective but not injective.</p>\n" }, { "answer_id": 2091160, "score": 2, "text": "<p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>\n" }, { "answer_id": 2091106, "score": 24, "text": "<p>No, it does not. Consider the space $\\ell_\\infty$ of bounded sequences of real numbers. 
Question: Must an injective or surjective map in an infinite dimensional vector space be a bijection?

Answer 1: <p>No, it does not. Consider the space $\ell_\infty$ of bounded sequences of real numbers. The map</p> <p>$$S:\ell_\infty\to\ell_\infty:\langle x_0,x_1,x_2,\ldots\rangle\mapsto\langle 0,x_0,x_1,x_2,\ldots\rangle$$</p> <p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p>
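To make Answer 1's shift map concrete, here is a minimal Python sketch (my own illustration, not part of the original answer). It models a sequence as a function from index to term; `S`, `L`, and the test sequence `x` are names introduced here for illustration.

```python
# Model a bounded real sequence as a function n |-> x_n for n >= 0,
# and implement the right-shift map S from Answer 1.

def S(x):
    """Right shift: (S x)_0 = 0, (S x)_n = x_{n-1} for n >= 1."""
    return lambda n: 0.0 if n == 0 else x(n - 1)

def L(y):
    """Left shift: (L y)_n = y_{n+1}; a left inverse of S, witnessing injectivity."""
    return lambda n: y(n + 1)

x = lambda n: 1.0 / (n + 1)       # the bounded sequence 1, 1/2, 1/3, ...
y = S(x)

print([y(n) for n in range(5)])   # [0.0, 1.0, 0.5, 0.333..., 0.25]: leading 0 inserted
recovered = L(S(x))
print(all(recovered(n) == x(n) for n in range(10)))  # True: L after S is the identity

# Non-surjectivity: every S(x) has first term 0, so e.g. the constant
# sequence 1, 1, 1, ... is not in the image of S.
```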
Answer 2: <p>No, it does not; take for example the linear operator $D$, the derivative map</p> <p>$$D: C^\infty(\Bbb R)\to C^\infty(\Bbb R).$$</p> <p>This map is surjective by the fundamental theorem of calculus, but it is not injective: the derivative of any constant is $0$, so the null space is non-trivial.</p>

Answer 3: <p>Another counterexample, taken from algebra: let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map which is injective but not surjective, since polynomials with a non-zero constant term are not attained.</p>
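Answer 3's multiplication by $X$, together with the formal derivative from Answer 6 below, can be tried out directly on coefficient lists. A minimal sketch, assuming we represent a polynomial over $\Bbb Q$ by its coefficient list $[a_0, a_1, \ldots, a_n]$; `mul_by_X`, `derivative`, and `antiderivative` are helper names introduced here, not from the original answers.

```python
from fractions import Fraction

def mul_by_X(p):
    """Multiplication by X: shift coefficients up; the constant term becomes 0."""
    return [Fraction(0)] + list(p)

def derivative(p):
    """Formal derivative: [a_0, a_1, a_2, ...] -> [a_1, 2*a_2, 3*a_3, ...]."""
    return [k * a for k, a in enumerate(p)][1:]

def antiderivative(q):
    """A right inverse of `derivative` (constant of integration chosen as 0),
    witnessing surjectivity of the formal derivative."""
    return [Fraction(0)] + [a / (k + 1) for k, a in enumerate(q)]

p = [Fraction(3), Fraction(0), Fraction(1)]        # 3 + x^2
print(mul_by_X(p))   # coefficients of 3x + x^3: the constant term is always 0,
                     # so 1 + x has no preimage -> mul_by_X is not surjective

q = [Fraction(1), Fraction(2)]                      # 1 + 2x
print(derivative(antiderivative(q)) == q)           # True: derivative is surjective
print(derivative([Fraction(5)]) == derivative([]))  # True: constants collapse -> not injective
```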
Answer 4: <p>It does not. Take the space of real sequences and the transformation</p> <p>$T(u) = v$, where $v_n = u_{n+1}$.</p> <p>Then $T$ is surjective, but it is not injective.</p>

Answer 5: <p>Abstract example: take a basis $(b_i)_{i\in I}$ of the space. Let $J\subset I$ with $|J| = |I|$, and let $\sigma:I\longrightarrow J$ be a bijection. The linear function defined by $$S(b_i) = b_{\sigma(i)}$$ is injective but not surjective, while $$T(b_i) = b_{\sigma^{-1}(i)},\quad i\in J,$$ $$T(b_i) = 0,\quad i\in I\setminus J,$$ is surjective but not injective.</p>
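A short verification of Answer 5's claims (added here, not part of the original answer), in the answer's own notation: since $\sigma(i)\in J$ for every $i\in I$, $$(T\circ S)(b_i) = T(b_{\sigma(i)}) = b_{\sigma^{-1}(\sigma(i))} = b_i,$$ so $T\circ S = \mathrm{id}$, which already forces $S$ to be injective and $T$ to be surjective. In the other order, $(S\circ T)(b_j) = b_j$ for $j\in J$ but $(S\circ T)(b_i) = 0$ for $i\in I\setminus J$, so $S\circ T$ is only the projection onto $\operatorname{span}\{b_j : j\in J\}$, not the identity. Hence each map is a one-sided inverse of the other, and neither is a bijection.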
The map</p> <p>$$S:\ell_\infty\to\ell_\infty:\langle x_0,x_1,x_2,\ldots\rangle\mapsto\langle 0,x_0,x_1,x_2,\ldots\rangle$$</p> <p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p> Answer 116: <p>No it does not, take for example the linear operator $D$, the derivative map from</p> <p>$$C^\infty(\Bbb R)\to C^\infty(\Bbb R)$$</p> <p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p> Answer 117: <p>Another counter-example, taken from algebra:</p> <p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p> Answer 118: <p>It does not.</p> <p>Take the space of the real sequences, and the transformation </p> <p>$T(u) = v$ where $v_n = u_{n+1}$</p> <p>Then $T$ is surjective, but it's not injective.</p> Answer 119: <p>Abstract example: take a basis $(b_i)_{i\in I}$ of the space. Let be $J\subset I$ with $|J| = |I|$ and $\sigma:I\longrightarrow J$ a bijection. The linear function defined by $$S(b_i) = b_{\sigma(i)},$$ is injective but not surjective while $$T(b_i) = b_{\sigma^{-1}(i)},i\in J,$$ $$T(b_i) = 0,i\in I\setminus J,$$ is surjective but not injective.</p> Answer 120: <p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p> Answer 121: <p>No, it does not. Consider the space $\ell_\infty$ of bounded sequences of real numbers. The map</p> <p>$$S:\ell_\infty\to\ell_\infty:\langle x_0,x_1,x_2,\ldots\rangle\mapsto\langle 0,x_0,x_1,x_2,\ldots\rangle$$</p> <p>that shifts each sequence one term to the right and adds a leading $0$ term is linear, injective, and clearly not surjective.</p> Answer 122: <p>No it does not, take for example the linear operator $D$, the derivative map from</p> <p>$$C^\infty(\Bbb R)\to C^\infty(\Bbb R)$$</p> <p>Then this map is clearly surjective by the FTC, but it is not injective as the derivative of any constant is $0$, so the null-space is non-trivial.</p> Answer 123: <p>Another counter-example, taken from algebra:</p> <p>Let $K$ be a field, and consider the infinite dimensional $K$-vector space of polynomials in one indeterminate, $V=K[X]$. Multiplication by $X$ is a linear map, which is injective, but not surjective, since polynomials with a non-zero constant term are not attained.</p> Answer 124: <p>It does not.</p> <p>Take the space of the real sequences, and the transformation </p> <p>$T(u) = v$ where $v_n = u_{n+1}$</p> <p>Then $T$ is surjective, but it's not injective.</p> Answer 125: <p>Abstract example: take a basis $(b_i)_{i\in I}$ of the space. Let be $J\subset I$ with $|J| = |I|$ and $\sigma:I\longrightarrow J$ a bijection. The linear function defined by $$S(b_i) = b_{\sigma(i)},$$ is injective but not surjective while $$T(b_i) = b_{\sigma^{-1}(i)},i\in J,$$ $$T(b_i) = 0,i\in I\setminus J,$$ is surjective but not injective.</p> Answer 126: <p>Here's another example of a surjective but not bijective map, also on $V=K[X]$, the vector space of polynomials over a field. 
Taking the formal derivative of a polynomial is a surjective linear map, but not a bijection since for example all constant polynomials are mapped to zero.</p>
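To make these examples concrete, here is a minimal runnable sketch (Python; the coefficient-list representation and helper names are my own illustration, not taken from the answers above). It checks that the formal derivative on polynomials is surjective but not injective, while multiplication by $X$ is injective but not surjective:

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists [a0, a1, a2, ...].

def deriv(p):
    # Formal derivative. Every constant maps to the zero polynomial,
    # so deriv is NOT injective.
    return [i * a for i, a in enumerate(p)][1:] or [0]

def antider(p):
    # Term-by-term antiderivative: a right inverse of deriv,
    # witnessing that deriv IS surjective.
    return [0] + [Fraction(a, i + 1) for i, a in enumerate(p)]

def times_x(p):
    # Multiplication by X shifts coefficients up: injective, but no
    # polynomial with nonzero constant term is in the image.
    return [0] + list(p)

p = [5, 0, 3]                      # 5 + 3*X**2
assert deriv(antider(p)) == p      # deriv is onto ...
assert deriv([7]) == deriv([9])    # ... but not one-to-one
assert times_x(p)[0] == 0          # image of times_x misses 1, X+1, ...
```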
What does it mean for an element of a group to be a function on the group?
2,096,841
12
https://math.stackexchange.com/questions/2096841/what-does-it-mean-for-an-element-of-a-group-to-be-a-function-on-the-group
<p>I am trying to learn group theory for my physics research. I'm working through these notes: <a href="http://www.damtp.cam.ac.uk/user/ho/GNotes.pdf" rel="nofollow noreferrer">http://www.damtp.cam.ac.uk/user/ho/GNotes.pdf</a></p> <p>On page 3, the author states:</p> <p>"It is straightforward to see that the set of all automorphisms of $G$ itself forms a group $\operatorname{Aut} G$ which must include $G/\mathcal{Z}(G)$ as a normal subgroup."</p> <p>This confuses me; there seems to be a type error. An element of $\operatorname{Aut} G$ is a function $\phi: G \rightarrow G$. An element of $G/\mathcal{Z}(G)$ is an element of $G.$ So the latter group cannot be a subset of the former group, hence the author must be assuming some identification between group elements and functions on group elements.</p> <p><strong><em>What identification is the author likely assuming?</em></strong> He references defining what he calls an "inner automorphism" by "$\phi_g (g_i) = g g_i g^{-1},$" and this seems like the identification I'm looking for, but I am uncomfortable with the fact that he's called this by the name "inner automorphism." </p>
[ { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. 
Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. 
Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. 
</p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. 
</p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. 
</p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. 
</p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. 
</p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. 
More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" }, { "answer_id": 2096849, "score": 14, "text": "<p><strong>Note:</strong> Each element $a \\in G$ defines an automorphishm\n$$\\phi_a : G \\to G \\,;\\, \\phi_a(x)=axa^{-1}$$</p>\n\n<p>The mapping \n$$F : G \\to Aut(G) \\,;\\, F(a)=\\phi_a$$\nis a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism:\n$$F : G / \\ker(F) \\to Im(F) \\leq Aut(G)$$\nand it is straight forward to check $\\ker(F)=\\mathcal{Z}(G)$. </p>\n\n<p>Now, via this \"natural\" mapping, $G/\\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be \"the same\".</p>\n" }, { "answer_id": 2096852, "score": 7, "text": "<p>It would be better</p>\n\n<blockquote>\n <p>which includes a normal subgroup isomorphic to $G/\\mathcal{Z}(G)$</p>\n</blockquote>\n\n<p>Given $g\\in G$, you can consider the inner automorphism $\\phi_g$ defined by\n$$\n\\phi_g(x)=gxg^{-1}\n$$\nIt is easy to see this is indeed an automorphism of $G$. Moreover\n$$\n\\phi_g\\circ\\phi_h=\\phi_{gh}\n$$\nso the map $\\phi\\colon G\\to\\operatorname{Aut}G$ defined by $g\\mapsto \\phi_g$ is a group homomorphism. Its kernel is $\\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism\n$$\nG/\\mathcal{Z}(G)\\to\\operatorname{Aut}G\n$$\nIt remains to show the image is a normal subgroup.</p>\n" }, { "answer_id": 2096861, "score": 4, "text": "<p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p>\n\n<p>There is a canonical homomorphism $\\begin{aligned}[t]\\varphi \\colon G&amp;\\longrightarrow \\operatorname{Aut}G\\\\g&amp;\\longmapsto(x\\mapsto gxg^{-1})\\end{aligned}$\n The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\\;G/Z(G)\\hookrightarrow \\operatorname{Aut}G$.</p>\n\n<p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\\varphi(g)$.</p>\n" } ]
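As a sanity check of the construction described in these answers, the following sketch (Python, with $S_3$ encoded as permutation tuples; the encoding and helper names are my own toy choice, not part of the answers) verifies that $g \mapsto \phi_g$ is a homomorphism whose kernel is exactly the centre, so its image has $|G|/|\mathcal{Z}(G)|$ elements:

```python
from itertools import permutations

# S3 as tuples p with p[i] the image of i; (p*q)(i) = p(q(i)).
G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(p.index(i) for i in range(3))

def phi(g):
    # Inner automorphism x -> g x g^{-1}, stored as a dict G -> G.
    return {x: mul(mul(g, x), inv(g)) for x in G}

# F(g) = phi_g is a homomorphism: phi_g o phi_h == phi_{gh}.
for g in G:
    pg = phi(g)
    for h in G:
        ph = phi(h)
        assert {x: pg[ph[x]] for x in G} == phi(mul(g, h))

# Its kernel is exactly the centre Z(G) (trivial for S3).
e = tuple(range(3))
kernel = [g for g in G if phi(g) == {x: x for x in G}]
centre = [g for g in G if all(mul(g, x) == mul(x, g) for x in G)]
assert kernel == centre == [e]

# Hence the image Inn(G) has |G| / |Z(G)| elements.
inner = {tuple(sorted(phi(g).items())) for g in G}
assert len(inner) == len(G) // len(centre)   # 6 // 1
```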
Question: What does it mean for an element of a group to be a function on the group? Answer 1: <p><strong>Note:</strong> Each element $a \in G$ defines an automorphishm $$\phi_a : G \to G \,;\, \phi_a(x)=axa^{-1}$$</p> <p>The mapping $$F : G \to Aut(G) \,;\, F(a)=\phi_a$$ is a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism: $$F : G / \ker(F) \to Im(F) \leq Aut(G)$$ and it is straight forward to check $\ker(F)=\mathcal{Z}(G)$. </p> <p>Now, via this "natural" mapping, $G/\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be "the same".</p> Answer 2: <p>It would be better</p> <blockquote> <p>which includes a normal subgroup isomorphic to $G/\mathcal{Z}(G)$</p> </blockquote> <p>Given $g\in G$, you can consider the inner automorphism $\phi_g$ defined by $$ \phi_g(x)=gxg^{-1} $$ It is easy to see this is indeed an automorphism of $G$. Moreover $$ \phi_g\circ\phi_h=\phi_{gh} $$ so the map $\phi\colon G\to\operatorname{Aut}G$ defined by $g\mapsto \phi_g$ is a group homomorphism. Its kernel is $\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism $$ G/\mathcal{Z}(G)\to\operatorname{Aut}G $$ It remains to show the image is a normal subgroup.</p> Answer 3: <p>You're right: it results from this identification. More precisely, any group acts on itself by <em>inner automorphisms</em> (one also says by <em>conjugation</em>).</p> <p>There is a canonical homomorphism $\begin{aligned}[t]\varphi \colon G&amp;\longrightarrow \operatorname{Aut}G\\g&amp;\longmapsto(x\mapsto gxg^{-1})\end{aligned}$ The kernel of this homomorphism is, by definition, the centre of $G$, $Z(G)$, so that we obtain an embedding $\;G/Z(G)\hookrightarrow \operatorname{Aut}G$.</p> <p>The image (the inner automorphisms) are a <em>normal subgroup</em> of $\operatorname{Aut}G$, because one easily checks that, if $f$ is an automorphism $G$, the conjugate of the inner automorphism induced by $g$ is the inner automorphism induced by $\varphi(g)$.</p> Answer 4: <p><strong>Note:</strong> Each element $a \in G$ defines an automorphishm $$\phi_a : G \to G \,;\, \phi_a(x)=axa^{-1}$$</p> <p>The mapping $$F : G \to Aut(G) \,;\, F(a)=\phi_a$$ is a group homomorphism. Therefore, by the first isomorphism Theorem, we get an isomorphism: $$F : G / \ker(F) \to Im(F) \leq Aut(G)$$ and it is straight forward to check $\ker(F)=\mathcal{Z}(G)$. </p> <p>Now, via this "natural" mapping, $G/\mathcal{Z}(G)$ can be identified (is isomorphic) with a subgroup of $Aut(G)$, and we usually consider them to be "the same".</p> Answer 5: <p>It would be better</p> <blockquote> <p>which includes a normal subgroup isomorphic to $G/\mathcal{Z}(G)$</p> </blockquote> <p>Given $g\in G$, you can consider the inner automorphism $\phi_g$ defined by $$ \phi_g(x)=gxg^{-1} $$ It is easy to see this is indeed an automorphism of $G$. Moreover $$ \phi_g\circ\phi_h=\phi_{gh} $$ so the map $\phi\colon G\to\operatorname{Aut}G$ defined by $g\mapsto \phi_g$ is a group homomorphism. Its kernel is $\mathcal{Z}(G)$, so the homomorphism theorem says we have an injective homomorphism $$ G/\mathcal{Z}(G)\to\operatorname{Aut}G $$ It remains to show the image is a normal subgroup.</p> Answer 6: <p>You're right: it results from this identification. 
Continuous functions define a topology?
2,101,750
12
https://math.stackexchange.com/questions/2101750/continuous-functions-define-a-topology
<p>The WP article on general topology has a section titled "<a href="https://en.wikipedia.org/wiki/General_topology#Defining_topologies_via_continuous_functions">Defining topologies via continuous functions</a>," which says,</p> <blockquote> <p>given a set S, specifying the set of continuous functions $S \rightarrow X$ into all topological spaces X defines a topology [on S].</p> </blockquote> <p>The first thing that bothered me about this was that clearly this collection of continuous functions is a proper class, not a set. Is there a way of patching up this statement so that it literally makes sense, and if so, how would one go about proving it?</p> <p>The same section of the article has this:</p> <blockquote> <p>for a function f from a set S to a topological space, the initial topology on S has as open subsets A of S those subsets for which f(A) is open in X.</p> </blockquote> <p>This confuses me, because it seems that this does not necessarily define a topology. For example, let S be a set with two elements, and let f be a function that takes these elements to two different points on the real line. Then f(S) is not open, which means that S is not an open set in S, but that violates one of the axioms of a topological space. Am I just being stupid because I haven't had enough coffee this morning?</p>
[ { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! 
- by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. 
Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! 
- by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. 
Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! 
- by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" }, { "answer_id": 2101762, "score": 16, "text": "<p>The first statement can be patched up using functions into the Sierpinski Space $\\{0,1\\}$ with topology $\\{\\varnothing , \\{1\\}, \\{0,1\\}\\}$. Since a continuous function $X \\to \\{0,1\\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>\n" }, { "answer_id": 2101757, "score": 10, "text": "<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\\vert S\\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p>\n\n<p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of \"enough target spaces\". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\\le\\kappa+3$, where $\\vert X\\vert+\\aleph_0\\le \\kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p>\n\n<p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p>\n\n<p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\\{A\\subseteq S: A=f^{-1}(U)\\mbox{ for some $U$ open in the target space}\\}.$$ The discrepancy, of course, is due to the fact that $f\\circ f^{-1}(U)\\subseteq U$, but these sets are <em>not equal in general</em>.</p>\n" } ]
Question: Continuous functions define a topology? Answer 1: <p>The first statement can be patched up using functions into the Sierpinski Space $\{0,1\}$ with topology $\{\varnothing , \{1\}, \{0,1\}\}$. Since a continuous function $X \to \{0,1\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p> Answer 2: <p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\vert S\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p> <p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of "enough target spaces". This is potentially an issue, since each homeomorphism type contains a proper class of spaces!
This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\le\kappa+3$, where $\vert X\vert+\aleph_0\le \kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p> <p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p> <p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\{A\subseteq S: A=f^{-1}(U)\mbox{ for some $U$ open in the target space}\}.$$ The discrepancy, of course, is due to the fact that $f\circ f^{-1}(U)\subseteq U$, but these sets are <em>not equal in general</em>.</p> Answer 41: <p>The first statement can be patched up using functions into the Sierpinski Space $\{0,1\}$ with topology $\{\varnothing , \{1\}, \{0,1\}\}$. Since a continuous function $X \to \{0,1\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p> Answer 42: <p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\vert S\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p> <p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of "enough target spaces". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\le\kappa+3$, where $\vert X\vert+\aleph_0\le \kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p> <p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p> <p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\{A\subseteq S: A=f^{-1}(U)\mbox{ for some $U$ open in the target space}\}.$$ The discrepancy, of course, is due to the fact that $f\circ f^{-1}(U)\subseteq U$, but these sets are <em>not equal in general</em>.</p>
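<p>For a concrete instance of the strict inclusion: if $f:\mathbb{R}\to\mathbb{R}$ is the constant map $f(x)=0$ and $U=(-1,1)$, then $f^{-1}(U)=\mathbb{R}$ while $f\circ f^{-1}(U)=\{0\}\subsetneq U$; in general $f\circ f^{-1}(U)=U\cap f(S)$, so equality for every open $U$ holds exactly when $f$ is surjective. As a minimal sketch (added here, not part of the original answers, using an arbitrarily chosen finite example), the induced topology can also be computed by brute force:</p> <pre><code># target space: the Sierpinski space on {0, 1}, with opens {}, {1}, {0, 1}
TARGET_OPENS = [frozenset(), frozenset({1}), frozenset({0, 1})]

S = ["a", "b", "c"]
f = {"a": 1, "b": 1, "c": 0}  # an arbitrary map from S into {0, 1}

def preimage(U):
    return frozenset(x for x in S if f[x] in U)

# the topology induced by f: preimages of the open sets of the target
induced = {preimage(U) for U in TARGET_OPENS}
print([sorted(A) for A in sorted(induced, key=len)])
# -> [[], ['a', 'b'], ['a', 'b', 'c']]
</code></pre>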
Divide a square into rectangles where each occurs once in each orientation
2,101,811
12
https://math.stackexchange.com/questions/2101811/divide-a-square-into-rectangles-where-each-occurs-once-in-each-orientation
<p>A $26 \times 26$ square can be divided into different rectangles so that each occurs exactly twice, once in each orientation.</p> <p><a href="https://i.stack.imgur.com/8GfeS.jpg"><img src="https://i.stack.imgur.com/8GfeS.jpg" alt="twice used rectangles"></a></p> <p>I've also found a solution for the $10 \times 10$ square, but no others. Are there any other squares that can be divided into a finite number of rectangles so that each occurs exactly twice, in different orientations?</p>
[ { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. 
Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. 
Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. 
Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. 
Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. 
Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 2124264, "score": 2, "text": "<p>Let us consider the following tiling of $8 \\times 8$ square:<code>\n11111222\n11111222\n11111222\n55557222\n64497222\n64493333\n64493333\n64400088\n</code>\nEach digit indicates number of rectangle the cell belongs to.\nFrom 0 to 9 the sizes are: $3 \\times 1$, $3 \\times 5$, $5 \\times 3$, $4 \\times 2$, $2 \\times 4$, $4 \\times 1$, $1 \\times 4$, $1 \\times 2$, $2 \\times 1$, $1 \\times 3$.</p>\n" }, { "answer_id": 4847002, "score": 1, "text": "<p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested since I left the squares in.</p>\n<p><a href=\"https://i.stack.imgur.com/DcHEC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DcHEC.png\" alt=\"enter image description here\" /></a></p>\n" } ]
Question: Divide a square into rectangles where each occurs once in each orientation Answer 1: <p>Let us consider the following tiling of the $8 \times 8$ square:<code> 11111222 11111222 11111222 55557222 64497222 64493333 64493333 64400088 </code> Each digit indicates the number of the rectangle the cell belongs to. From 0 to 9 the sizes are: $3 \times 1$, $3 \times 5$, $5 \times 3$, $4 \times 2$, $2 \times 4$, $4 \times 1$, $1 \times 4$, $1 \times 2$, $2 \times 1$, $1 \times 3$.</p> Answer 2: <p>Here are all rectangles with area 2 through 13, both orientations of those that are not square, in a 17x17. Not quite what you requested, since I left the squares in.</p> <p><a href="https://i.stack.imgur.com/DcHEC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DcHEC.png" alt="enter image description here" /></a></p>
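<p>As a quick sanity check of the tiling in Answer 1 (a minimal added sketch, not part of the original answers), the Python script below parses the digit grid, verifies that each label fills a solid rectangle, and confirms that the ten rectangles pair up as transposes of one another:</p> <pre><code>from collections import Counter, defaultdict

GRID = """\
11111222
11111222
11111222
55557222
64497222
64493333
64493333
64400088"""

rows = GRID.splitlines()

# group the cells of the 8x8 grid by their digit label
cells = defaultdict(list)
for r, line in enumerate(rows):
    for c, d in enumerate(line):
        cells[d].append((r, c))

sizes = []
for d, pts in cells.items():
    h = max(r for r, _ in pts) - min(r for r, _ in pts) + 1
    w = max(c for _, c in pts) - min(c for _, c in pts) + 1
    # a label is a solid rectangle iff it exactly fills its bounding box
    assert len(pts) == h * w, f"label {d} is not a rectangle"
    sizes.append((h, w))

# every cell carries exactly one digit, so the pieces tile the square;
# each oriented size must occur once and each unordered pair exactly twice
assert len(set(sizes)) == len(sizes)
pairs = Counter(tuple(sorted(s)) for s in sizes)
assert all(n == 2 for n in pairs.values()), pairs
print(sorted(sizes))  # five transposed pairs covering the 8x8 square
</code></pre>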
How to find the sum of this infinite series: $\sum_{n=1}^{ \infty } \frac1n \cdot \frac{H_{n+2}}{n+2}$
2,104,031
12
https://math.stackexchange.com/questions/2104031/how-to-find-the-sum-of-this-infinite-series-sum-n-1-infty-frac1n-cdo
<p>How do I find this particular sum? $$\sum_{n=1}^{ \infty } \frac1n \cdot \frac{H_{n+2}}{n+2}$$</p> <p>where $H_n = \sum_{k=1}^{n}\frac1k$.</p> <p>This was given to me by a friend and I have absolutely no idea how to proceed as I have never done these questions before. If possible, please give a way out without using polylogarithmic functions or other non-elementary functions.</p>
[ { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi 
^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer, but it is too long for a comment.</em></p>\n\n<p>Just out of curiosity and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers, $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer, which can also be written as $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the values of $H_{k+1}$ and $H_{k+2}$. 
Recall that the proof of the integral test gives, for the partial sums, \n$$\n\\int_1^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq 1+\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)\\leq H_k\\leq 1+\\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get\n$$\n\\frac{\\ln(k+2)}{2(k+1)}+\\frac{\\ln(k+3)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{1+\\ln(k+1)}{2(k+1)}+\\frac{1+\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" } ]
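<p>As a quick numerical cross-check of the closed form $7/4$ (a minimal added sketch, not part of the original answers), the Python script below computes the partial sums $S_p$ and compares them with the asymptotic expansion $S_p\approx 7/4-(\log p+\gamma+1)/p$ given in the CAS answer:</p> <pre><code>from math import log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def partial_sum(p):
    """S_p = sum_{n=1}^p H_{n+2} / (n (n + 2))."""
    H = 1.0 + 1.0 / 2  # H_2; updated to H_{n+2} inside the loop
    S = 0.0
    for n in range(1, p + 1):
        H += 1.0 / (n + 2)  # H is now H_{n+2}
        S += H / (n * (n + 2))
    return S

for p in (10, 100, 1000, 10000):
    S = partial_sum(p)
    approx = 7 / 4 - (log(p) + GAMMA + 1) / p
    # both gaps shrink; the second shrinks like O(1/p^2)
    print(p, 7 / 4 - S, approx - S)
</code></pre>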
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer, but it is too long for a comment.</em></p>\n\n<p>Just out of curiosity and playing with a CAS,\n$$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers, $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar closed forms, \n$$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$ As expected, all the limits agree with Renascence_5's answer, which can also be written as $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p>
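\n\n<p>The closed form for $S_p$ is easy to confirm in exact rational arithmetic; below is a minimal sketch in Python (the helper names are mine):</p>\n\n<pre><code>from fractions import Fraction

def S_direct(p):
    # partial sum of H_{n+2} / (n (n+2)) for n = 1..p, computed exactly
    H = Fraction(0)
    harmonics = [Fraction(0)]          # harmonics[m] will hold H_m
    for m in range(1, p + 3):
        H += Fraction(1, m)
        harmonics.append(H)
    return sum(harmonics[n + 2] / Fraction(n * (n + 2)) for n in range(1, p + 1))

def S_closed(p):
    # the CAS closed form quoted above, with H_{p+3} computed exactly
    H = sum(Fraction(1, m) for m in range(1, p + 4))
    num = -2 * (2 * p**2 + 9 * p + 9) * H + (7 * p**3 + 38 * p**2 + 64 * p + 33)
    return num / Fraction(4 * (p + 1) * (p + 2) * (p + 3))

assert all(S_direct(p) == S_closed(p) for p in range(1, 40))
print(float(S_closed(2000)))  # about 1.7454, creeping up to 7/4
</code></pre>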
\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\begin{align*}\nH_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}.\n\end{align*}\nTherefore, \n$$\n\frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the values of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\int_1^{k+1}\frac{1}{x}\,dx\leq \sum_{n=1}^k\frac{1}{n}\leq 1+\int_1^k\frac{1}{x}\,dx.\n$$\nTherefore, we know that\n$$\n\ln(k+1)\leq H_k\leq 1+\ln(k).\n$$\nApplying these inequalities with $k$ replaced by $k+1$ and $k+2$, we get\n$$\n\frac{\ln(k+2)}{2(k+1)}+\frac{\ln(k+3)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{1+\ln(k+1)}{2(k+1)}+\frac{1+\ln(k+2)}{2(k+2)}\n$$\nThe bounds on the left and right both go to zero as $k$ increases, because a logarithm grows more slowly than any linear term. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the limit.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute\n$$\n\sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\begin{align*}\n\frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp;\n\frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}.\n\end{align*}\nThis partial sum then becomes\n$$\n\sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}.\n$$\nThese sums telescope to\n$$\n\frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator vanish as $k$ increases, leaving\n$$\n\frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}.\n$$</p>
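\n\n<p>For reassurance, the two constant pieces (and their total) can be checked in exact arithmetic with a short Python sketch (the variable names are mine):</p>\n\n<pre><code>from fractions import Fraction

# Part 1: H_3/2 + H_4/4
H3 = Fraction(1, 1) + Fraction(1, 2) + Fraction(1, 3)   # 11/6
H4 = H3 + Fraction(1, 4)                                # 25/12
part1 = H3 / 2 + H4 / 4

# Part 2: the telescoped constant 1/6 + 1/12 + 1/16
part2 = Fraction(1, 6) + Fraction(1, 12) + Fraction(1, 16)

print(part1, part2, part1 + part2)  # 23/16 5/16 7/4
</code></pre>\n" }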
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\frac{\ln(k+2)}{2(k+1)}+\frac{\ln(k+3)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{1+\ln(k+1)}{2(k+1)}+\frac{1+\ln(k+2)}{2(k+2)}\n$$\nThe bounds on the left and right both go to zero as $k$ increases, because their numerators grow logarithmically while their denominators grow linearly. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute\n$$\n\sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\begin{align*}\n\frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp;\n\frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}.\n\end{align*}\nThis partial sum then becomes\n$$\n\sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator vanish as $k$ increases, leaving\n$$\n\frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}.\n$$</p>\n" } ]
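
As a cross-check on the five answers above, here is a minimal Python sketch (an editorial addition, not taken from any answer; the helper names `harmonic`, `S_direct`, and `S_closed` are invented for this illustration). It verifies, in exact rational arithmetic, that the direct partial sums match the closed form $S_p$ quoted in the CAS answer, and that they drift toward $7/4$:

```python
from fractions import Fraction

def harmonic(m):
    """m-th harmonic number H_m as an exact rational."""
    return sum(Fraction(1, k) for k in range(1, m + 1))

def S_direct(p):
    """Partial sum S_p = sum_{n=1}^p H_{n+2} / (n(n+2)), computed exactly."""
    h = harmonic(2)               # running value of H_{n+2}, starting at H_2
    total = Fraction(0)
    for n in range(1, p + 1):
        h += Fraction(1, n + 2)   # update H_{n+1} -> H_{n+2}
        total += h / (n * (n + 2))
    return total

def S_closed(p):
    """The closed form for S_p quoted in the CAS answer above."""
    numerator = (-2 * (2 * p**2 + 9 * p + 9) * harmonic(p + 3)
                 + (7 * p**3 + 38 * p**2 + 64 * p + 33))
    return numerator / (4 * (p + 1) * (p + 2) * (p + 3))

for p in (1, 5, 50, 200):
    assert S_direct(p) == S_closed(p)   # exact agreement, term for term
    print(p, float(S_direct(p)))        # slowly approaches 7/4 = 1.75
```

Using `fractions.Fraction` makes the comparison with the closed form an exact identity test rather than a floating-point check; the printed values approach $1.75$ only slowly, which is consistent with the $(\log p+\gamma+1)/p$ error term given in the asymptotics above.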
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$\n$$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma\n +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all of these limits agree with Renascence_5's answer, which can also be written as $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)}.\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\begin{align*}\nH_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}.\n\end{align*}\nTherefore, \n$$\n\frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the values of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq 1+\int_1^k\frac{1}{x}dx,\n$$\nwhere the extra $1$ on the right accounts for the first term of the sum. Therefore, we know that\n$$\n\ln(k+1)-\ln(2)\leq H_k\leq 1+\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{\\ln(k+1)}{2(k+1)}+\\frac{\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" }, { "answer_id": 2104073, "score": 12, "text": "<blockquote>\n <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\\sum_{k=1}^\\infty \\frac1{k^2}=\\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>The Harmonic number, $H_{n+2}$, can be written as </p>\n\n<p>$$H_{n+2}=\\sum_{k=1}^{n+2}\\frac1k=\\frac{1}{n+2}+\\frac{1}{n+1}+\\sum_{k=1}^n \\frac1k$$</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{n=1}^\\infty\\frac{1}{n(n+2)}\\sum_{k=1}^{n+2}\\frac1k&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty \\frac1k \\sum_{n=k}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+2)}\\right)\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\sum_{k=1}^\\infty\\frac1{2k^2}+\\frac12\\\\\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac{1}{n(n+2)^2}+\\sum_{n=1}^\\infty\\frac{1}{n(n+1)(n+2)}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{red}{\\sum_{n=1}^\\infty\\left(\\frac{1}{4n}-\\frac{1}{4(n+1)}-\\frac{1}{2(n+2)^2}\\right)}\\\\\\\\\n&amp;+\\color{blue}{\\sum_{n=1}^\\infty\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{2(n+2)}-\\frac{1}{2(n+1)}\\right)}\\\\\\\\\n&amp;+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\color{blue}{1-\\frac{\\pi^2}{12}}+\\color{red}{\\frac14}+\\frac{\\pi^2}{12}+\\frac12\\\\\\\\\n&amp;=\\frac74\n\\end{align}$$</p>\n" }, { "answer_id": 2104733, "score": 9, "text": "<p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact\n\\begin{align}\n\\sum_{n=1}^\\infty\\frac{H_{n+2}}{n(n+2)}&amp;=\\frac12\\sum_{n=1}^\\infty H_{n+2}\\left(\\frac{1}{n}-\\frac1{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{1}{n}\\left(H_{n}+\\frac{1}{n+1}+\\frac{1}{n+2}\\right)-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\ \n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}+\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}-\\frac{H_{n+2}}{n+2}\\right)\\\\\\\\\n&amp;=\\frac{1}{2}\\sum_{n=1}^\\infty\\left(\\frac{H_n}{n}-\\frac{H_{n+2}}{n+2}\\right)+\\frac12\\sum_{n=1}^\\infty\\left(\\frac{1}{n(n+1)}+\\frac{1}{n(n+2)}\\right)\\\\\\\\\n&amp;=\\frac12\\left(H_1+\\frac{H_2}2\\right)+\\frac78\\\\\\\\\n&amp;=\\frac{7}{4}.\n\\end{align}</p>\n" }, { "answer_id": 2104049, "score": 8, "text": "<p>recall: $\\displaystyle H_a = \\int_0^1 \\frac{1-x^a}{1-x}\\,\\mathrm{d}x$, and integration by parts once we have\n$$\\int_0^1 x^{a-1} \\ln (1-x)\\,\\mathrm{d}x = -\\frac{H_a}{a}$$\nThus,\n\\begin{align*}&amp;\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+a}}{n(n+a)} = -\\sum\\limits_{n=1}^{\\infty} \\frac{1}{n}\\int_0^{1} x^{n+a-1} \\ln (1-x)\\,\\mathrm{d}x= \\int_0^{1} x^{a-1} \\ln^{2}(1-x)\\,\\mathrm{d}x\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\partial^2}{\\partial b^2}\\int_0^{1} x^{a-1}(1-x)^{b-1}\\,\\mathrm{d}x= \\lim\\limits_{b \\to 1} \\frac{\\partial^2}{\\partial b^2} \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\\\&amp;= \\lim\\limits_{b \\to 1}\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\left((\\psi_0(b) - \\psi_0(a+b))^2 + \\psi_1(b) - \\psi_1(a+b)\\right)\\\\&amp;= \\frac{1}{a}\\left((\\gamma + \\psi_0(a+1))^2 + \\frac{\\pi^2}{6} - \\psi_1(a+1)\\right)\\end{align*}\nHence put $a=2$ we get\n$$\\sum\\limits_{n=1}^{\\infty} \\frac{H_{n+2}}{n(n+2)} =\\frac{1}{2}\\left [ \\left ( \\gamma +\\underset{\\psi _{0}\\left ( 3 \\right )}{\\underbrace{\\frac{3}{2}-\\gamma}} \\right )^{2}+\\frac{\\pi ^{2}}{6}-\\underset{\\psi _{1}\\left ( 3 \\right )}{\\underbrace{\\left (\\frac{\\pi ^{2}}{6}-\\frac{5}{4} \\right )} }\\right ]=\\frac{7}{4}$$</p>\n" }, { "answer_id": 2104119, "score": 3, "text": "<p><em>This is not an answer but it is too long for comment.</em></p>\n\n<p>Just out of curiosity 
and playing with a CAS,\n$$S_p=\\sum_{n=1}^{ p } \\frac{H_{n+2}}{n(n+2)}=\\frac{-2 \\left(2 p^2+9 p+9\\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2)\n (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$</p>\n\n<p>Similarly, considering $$T_p^{(a)}=\\sum_{n=1}^{ p } \\frac{H_{n+a}}{n(n+a)}$$ we get similar forms \n$$T_p^{(1)}=\\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p>\n\n<p>$$T_p^{(3)}=\\frac{-18 \\left(3 p^3+24 p^2+59 p+44\\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563\n p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are\n$$T_p^{(1)}=2-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(2)}=\\frac{7}{4}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(3)}=\\frac{85}{54}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(4)}=\\frac{415}{288}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$\n$$T_p^{(5)}=\\frac{12019}{9000}-\\frac{\\log \\left(p\\right)+\\gamma\n +1}{p}+O\\left(\\frac{1}{p^2}\\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\\frac 1a\\left(\\frac{\\pi^2}6+\\left(H_a\\right){}^2-\\psi ^{(1)}(a+1)\\right)$$</p>\n" }, { "answer_id": 2104399, "score": 3, "text": "<p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p>\n\n<p>Consider \n$$\n\\sum_{n=1}^\\infty\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}\n$$\nwhere $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series:\n$$\n\\sum_{n=1}^k\\frac{1}{n}\\cdot\\frac{H_{n+2}}{n+2}.\n$$\nUsing partial fraction decomposition, we observe that \n$$\n\\frac{1}{n}\\cdot\\frac{1}{n+2}=\\frac{1}{2n}-\\frac{1}{2(n+2)}\n$$\nTherefore, the partial sum can be rewritten as\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{n=1}^k\\frac{H_{n+2}}{2(n+2)}\n$$\nUsing the change of variables $m=n+2$ on the second sum, we get\n$$\n\\sum_{n=1}^k\\left(\\frac{H_{n+2}}{2n}-\\frac{H_{n+2}}{2(n+2)}\\right)=\\sum_{n=1}^k\\frac{H_{n+2}}{2n}-\\sum_{m=3}^{k+2}\\frac{H_m}{2m}.\n$$\nWe can observe that these sums partially telescope; in fact, the expression equals\n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}+\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}-\\frac{H_{k+1}}{2(k+1)}-\\frac{H_{k+2}}{2(k+2)}.\n$$\nLet's deal with each part of the expression separately.</p>\n\n<p>Part 1: The first two terms can be evaluated directly. Namely, \n\\begin{align*}\nH_3&amp;=\\frac{11}{6}&amp;H_4&amp;=\\frac{25}{12}.\n\\end{align*}\nTherefore, \n$$\n\\frac{H_3}{2}+\\frac{H_4}{4}=\\frac{23}{16}.\n$$</p>\n\n<p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that \n$$\n\\int_2^{k+1}\\frac{1}{x}dx\\leq \\sum_{n=1}^k\\frac{1}{n}\\leq\\int_1^k\\frac{1}{x}dx.\n$$\nTherefore, we know that\n$$\n\\ln(k+1)-\\ln(2)\\leq H_k\\leq \\ln(k).\n$$\nApplying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. 
Either way, however, we get\n$$\n\\frac{\\ln(k+2)-\\ln(2)}{2(k+1)}+\\frac{\\ln(k+3)-\\ln(2)}{2(k+2)}\\leq\\frac{H_{k+1}}{2(k+1)}+\\frac{H_{k+2}}{2(k+2)}\\leq\\frac{1+\\ln(k+1)}{2(k+1)}+\\frac{1+\\ln(k+2)}{2(k+2)}\n$$\nThe fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p>\n\n<p>Part 2: We are now considering the sum\n$$\n\\sum_{n=3}^k\\frac{H_{n+2}-H_n}{2n}.\n$$\nObserve that $H_{n+2}-H_n=\\frac{1}{n+1}+\\frac{1}{n+2}$. Therefore, we must compute\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{n+1}\\cdot\\frac{1}{2n}+\\frac{1}{n+2}\\cdot\\frac{1}{2n}\\right).\n$$\nOnce again, we use partial fraction decomposition. Observe that\n\\begin{align*}\n\\frac{1}{n+1}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{2n}-\\frac{1}{2(n+1)}&amp;\n\\frac{1}{n+2}\\cdot\\frac{1}{2n}&amp;=\\frac{1}{4n}-\\frac{1}{4(n+2)}.\n\\end{align*}\nThis partial sum then becomes\n$$\n\\sum_{n=3}^k\\left(\\frac{1}{2n}-\\frac{1}{2(n+1)}+\\frac{1}{4n}-\\frac{1}{4(n+2)}\\right)=\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{n=3}^k\\frac{1}{2(n+1)}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{n=3}^k\\frac{1}{4(n+2)}.\n$$\nUsing the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get\n$$\n\\sum_{n=3}^k\\frac{1}{2n}-\\sum_{m=4}^{k+1}\\frac{1}{2m}+\\sum_{n=3}^k\\frac{1}{4n}-\\sum_{m=5}^{k+2}\\frac{1}{4m}.\n$$\nOnce again, these terms telescope to\n$$\n\\frac{1}{6}-\\frac{1}{2(k+1)}+\\frac{1}{12}+\\frac{1}{16}-\\frac{1}{4(k+1)}-\\frac{1}{4(k+2)}.\n$$\nAll of the terms with $k$ in the denominator will vanish as $k$ increases, leaving\n$$\n\\frac{1}{6}+\\frac{1}{12}+\\frac{1}{16}=\\frac{5}{16}.\n$$</p>\n\n<p>Adding everything up, we get\n$$\n\\frac{23}{16}+\\frac{5}{16}+0=\\frac{7}{4}.\n$$</p>\n" } ]
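As a quick sanity check on the answers above, the closed form for the partial sums $S_p$ quoted in the CAS comment and the limit $7/4$ can be verified in exact rational arithmetic. The following is a minimal Python sketch (the helper names are illustrative, not from the thread):

```python
from fractions import Fraction

def H(m):
    """Exact m-th harmonic number H_m = 1 + 1/2 + ... + 1/m."""
    return sum(Fraction(1, k) for k in range(1, m + 1))

def S_direct(p):
    """Partial sum  sum_{n=1}^p  H_{n+2} / (n (n+2)),  term by term."""
    return sum(H(n + 2) / (n * (n + 2)) for n in range(1, p + 1))

def S_closed(p):
    """Closed form for the same partial sum, as reported by the CAS."""
    num = -2 * (2 * p**2 + 9 * p + 9) * H(p + 3) \
          + (7 * p**3 + 38 * p**2 + 64 * p + 33)
    return num / (4 * (p + 1) * (p + 2) * (p + 3))

for p in (1, 2, 10, 100):
    assert S_direct(p) == S_closed(p)   # the two forms agree exactly
    print(p, float(S_direct(p)))        # increases toward 7/4 = 1.75
```

The assertion confirms the two forms agree exactly, and the printed values climb toward $1.75$ at the rate predicted by the $-\frac{\log(p)+\gamma+1}{p}$ error term quoted above.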
Question: How to find the sum of this infinite series: $\sum_{n=1}^{ \infty } \frac1n \cdot \frac{H_{n+2}}{n+2}$ Answer 1: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+2)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{1-\frac{\pi^2}{12}}+\color{blue}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 2: <p>Actually the calculation for this sum is very simple and what we need is the sum of a telescoping series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}.
\end{align}</p> Answer 3: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integrating by parts once, we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence, putting $a=2$, we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 4: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer, which can also be written as $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 5: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}.
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq 1+\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq 1+\ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{1+\ln(k+1)}{2(k+1)}+\frac{1+\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p>
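Remark: the $\frac78$ appearing in the telescoping answer (Answer 2) can be checked directly from two standard telescoping series:

$$\sum_{n=1}^\infty\frac{1}{n(n+1)}=\sum_{n=1}^\infty\left(\frac1n-\frac1{n+1}\right)=1,\qquad \sum_{n=1}^\infty\frac{1}{n(n+2)}=\frac12\sum_{n=1}^\infty\left(\frac1n-\frac1{n+2}\right)=\frac12\left(1+\frac12\right)=\frac34,$$

so that $\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)=\frac12\left(1+\frac34\right)=\frac78$. The surviving telescoping piece contributes $\frac12\left(H_1+\frac{H_2}{2}\right)=\frac12\left(1+\frac34\right)=\frac78$ as well (the tail vanishes because $H_n/n\to 0$), and $\frac78+\frac78=\frac74$, consistent with every answer above.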
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 7: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 8: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 9: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 10: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 11: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 12: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 13: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 14: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 15: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 16: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 17: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 18: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 19: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 20: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 21: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 22: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 23: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 24: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 25: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 26: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 27: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 28: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 29: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 30: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 31: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 32: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 33: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 34: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 35: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 36: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
Answer 36: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The harmonic number $H_{n+2}$ can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+2)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{1-\frac{\pi^2}{12}}+\color{blue}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p>
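<p>The two color-coded component sums, $\sum_{n\ge1}\frac{1}{n(n+2)^2}=1-\frac{\pi^2}{12}$ and $\sum_{n\ge1}\frac{1}{n(n+1)(n+2)}=\frac14$, are easy to confirm numerically; a minimal sketch, assuming an arbitrary cutoff <code>N</code>:</p>
<pre><code>from math import pi

N = 10**6
red  = sum(1 / (n * (n + 2)**2) for n in range(1, N))         # converges to 1 - pi^2/12
blue = sum(1 / (n * (n + 1) * (n + 2)) for n in range(1, N))  # converges to 1/4

print(red, 1 - pi**2 / 12)            # both ~0.177532966...
print(blue)                           # ~0.25
print(red + blue + pi**2 / 12 + 0.5)  # reassembles to 7/4 = 1.75
</code></pre>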
Answer 37: <p>Actually the calculation for this sum is very simple: all we need is the sum of a telescoping series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. \end{align}</p>
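<p>Both brackets in the last step are easy to confirm (a quick sketch with an arbitrary cutoff): the telescoping bracket gives $\frac12\left(H_1+\frac{H_2}{2}\right)=\frac78$, and since $\sum\frac{1}{n(n+1)}=1$ and $\sum\frac{1}{n(n+2)}=\frac34$, the second bracket gives $\frac78$ as well.</p>
<pre><code>N = 10**6
a = sum(1 / (n * (n + 1)) for n in range(1, N))   # telescopes to 1
b = sum(1 / (n * (n + 2)) for n in range(1, N))   # telescopes to 3/4

print((a + b) / 2)                 # ~0.875 = 7/8, the second bracket
print((1 + (1 + 1 / 2) / 2) / 2)   # (H_1 + H_2/2)/2 = 7/8, the first bracket
print((a + b) / 2 + (1 + (1 + 1 / 2) / 2) / 2)    # ~1.75 = 7/4
</code></pre>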
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 41: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 42: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 43: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 44: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 45: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 46: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 47: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 48: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 49: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 50: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 51: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 52: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 53: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 54: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 55: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 56: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 57: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 58: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 59: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 60: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 61: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 62: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 63: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 64: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 65: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 66: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+1)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{blue}{1-\frac{\pi^2}{12}}+\color{red}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 67: <p>Actually the calculation for this sum is very simple and what we need is the sum of telescopic series. In fact \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}. 
\end{align}</p> Answer 68: <p>recall: $\displaystyle H_a = \int_0^1 \frac{1-x^a}{1-x}\,\mathrm{d}x$, and integration by parts once we have $$\int_0^1 x^{a-1} \ln (1-x)\,\mathrm{d}x = -\frac{H_a}{a}$$ Thus, \begin{align*}&amp;\sum\limits_{n=1}^{\infty} \frac{H_{n+a}}{n(n+a)} = -\sum\limits_{n=1}^{\infty} \frac{1}{n}\int_0^{1} x^{n+a-1} \ln (1-x)\,\mathrm{d}x= \int_0^{1} x^{a-1} \ln^{2}(1-x)\,\mathrm{d}x\\&amp;= \lim\limits_{b \to 1}\frac{\partial^2}{\partial b^2}\int_0^{1} x^{a-1}(1-x)^{b-1}\,\mathrm{d}x= \lim\limits_{b \to 1} \frac{\partial^2}{\partial b^2} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\\&amp;= \lim\limits_{b \to 1}\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\left((\psi_0(b) - \psi_0(a+b))^2 + \psi_1(b) - \psi_1(a+b)\right)\\&amp;= \frac{1}{a}\left((\gamma + \psi_0(a+1))^2 + \frac{\pi^2}{6} - \psi_1(a+1)\right)\end{align*} Hence put $a=2$ we get $$\sum\limits_{n=1}^{\infty} \frac{H_{n+2}}{n(n+2)} =\frac{1}{2}\left [ \left ( \gamma +\underset{\psi _{0}\left ( 3 \right )}{\underbrace{\frac{3}{2}-\gamma}} \right )^{2}+\frac{\pi ^{2}}{6}-\underset{\psi _{1}\left ( 3 \right )}{\underbrace{\left (\frac{\pi ^{2}}{6}-\frac{5}{4} \right )} }\right ]=\frac{7}{4}$$</p> Answer 69: <p><em>This is not an answer but it is too long for comment.</em></p> <p>Just out of curiosity and playing with a CAS, $$S_p=\sum_{n=1}^{ p } \frac{H_{n+2}}{n(n+2)}=\frac{-2 \left(2 p^2+9 p+9\right) H_{p+3}+(7 p^3+38 p^2+64 p+33)}{4 (p+1) (p+2) (p+3)}$$ Using the asymptotics of harmonic numbers $$S_p=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$</p> <p>Similarly, considering $$T_p^{(a)}=\sum_{n=1}^{ p } \frac{H_{n+a}}{n(n+a)}$$ we get similar forms $$T_p^{(1)}=\frac{-(p+2) H_{p+2}+(2 p^2+5 p+3)}{(p+1) (p+2)}$$</p> <p>$$T_p^{(3)}=\frac{-18 \left(3 p^3+24 p^2+59 p+44\right) H_{p+4}+(85 p^4+796 p^3+2624 p^2+3563 p+1650)}{54 (p+1) (p+2) (p+3) (p+4)}$$ for which the asymptotics are $$T_p^{(1)}=2-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(2)}=\frac{7}{4}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(3)}=\frac{85}{54}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(4)}=\frac{415}{288}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ $$T_p^{(5)}=\frac{12019}{9000}-\frac{\log \left(p\right)+\gamma +1}{p}+O\left(\frac{1}{p^2}\right)$$ For sure, all limits correspond to Renascence_5's answer which can also write $$\frac 1a\left(\frac{\pi^2}6+\left(H_a\right){}^2-\psi ^{(1)}(a+1)\right)$$</p> Answer 70: <p>There is an entirely elementary approach to the computation, using <em>only</em> partial fractions and the integral test.</p> <p>Consider $$ \sum_{n=1}^\infty\frac{1}{n}\cdot\frac{H_{n+2}}{n+2} $$ where $H_{n+2}$ is the $(n+2)$-th partial sum of the harmonic series. Consider the $k$-th partial sum of the given series: $$ \sum_{n=1}^k\frac{1}{n}\cdot\frac{H_{n+2}}{n+2}. $$ Using partial fraction decomposition, we observe that $$ \frac{1}{n}\cdot\frac{1}{n+2}=\frac{1}{2n}-\frac{1}{2(n+2)} $$ Therefore, the partial sum can be rewritten as $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{n=1}^k\frac{H_{n+2}}{2(n+2)} $$ Using the change of variables $m=n+2$ on the second sum, we get $$ \sum_{n=1}^k\left(\frac{H_{n+2}}{2n}-\frac{H_{n+2}}{2(n+2)}\right)=\sum_{n=1}^k\frac{H_{n+2}}{2n}-\sum_{m=3}^{k+2}\frac{H_m}{2m}. 
$$ We can observe that these sums partially telescope; in fact, the expression equals $$ \frac{H_3}{2}+\frac{H_4}{4}+\sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}-\frac{H_{k+1}}{2(k+1)}-\frac{H_{k+2}}{2(k+2)}. $$ Let's deal with each part of the expression separately.</p> <p>Part 1: The first two terms can be evaluated directly. Namely, \begin{align*} H_3&amp;=\frac{11}{6}&amp;H_4&amp;=\frac{25}{12}. \end{align*} Therefore, $$ \frac{H_3}{2}+\frac{H_4}{4}=\frac{23}{16}. $$</p> <p>Part 3: Using the integral test, we can bound the value of $H_{k+1}$ and $H_{k+2}$. Recall that for partial sums, the integral test's proof tells us that $$ \int_2^{k+1}\frac{1}{x}dx\leq \sum_{n=1}^k\frac{1}{n}\leq\int_1^k\frac{1}{x}dx. $$ Therefore, we know that $$ \ln(k+1)-\ln(2)\leq H_k\leq \ln(k). $$ Applying these inequalities in our case, we have to change $k$ to $k+1$ and $k+2$. Either way, however, we get $$ \frac{\ln(k+2)-\ln(2)}{2(k+1)}+\frac{\ln(k+3)-\ln(2)}{2(k+2)}\leq\frac{H_{k+1}}{2(k+1)}+\frac{H_{k+2}}{2(k+2)}\leq\frac{\ln(k+1)}{2(k+1)}+\frac{\ln(k+2)}{2(k+2)} $$ The fractions on the left and right both go to zero as $k$ increases because they are logarithms over linear terms and linear terms grow faster. Use l'Hopital's rule once to check, if you need further justification. Therefore, these terms can be ignored in the final sum.</p> <p>Part 2: We are now considering the sum $$ \sum_{n=3}^k\frac{H_{n+2}-H_n}{2n}. $$ Observe that $H_{n+2}-H_n=\frac{1}{n+1}+\frac{1}{n+2}$. Therefore, we must compute $$ \sum_{n=3}^k\left(\frac{1}{n+1}\cdot\frac{1}{2n}+\frac{1}{n+2}\cdot\frac{1}{2n}\right). $$ Once again, we use partial fraction decomposition. Observe that \begin{align*} \frac{1}{n+1}\cdot\frac{1}{2n}&amp;=\frac{1}{2n}-\frac{1}{2(n+1)}&amp; \frac{1}{n+2}\cdot\frac{1}{2n}&amp;=\frac{1}{4n}-\frac{1}{4(n+2)}. \end{align*} This partial sum then becomes $$ \sum_{n=3}^k\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{4n}-\frac{1}{4(n+2)}\right)=\sum_{n=3}^k\frac{1}{2n}-\sum_{n=3}^k\frac{1}{2(n+1)}+\sum_{n=3}^k\frac{1}{4n}-\sum_{n=3}^k\frac{1}{4(n+2)}. $$ Using the change of variables $m=n+1$ in the second sum and $m=n+2$ in the fourth sum, we get $$ \sum_{n=3}^k\frac{1}{2n}-\sum_{m=4}^{k+1}\frac{1}{2m}+\sum_{n=3}^k\frac{1}{4n}-\sum_{m=5}^{k+2}\frac{1}{4m}. $$ Once again, these terms telescope to $$ \frac{1}{6}-\frac{1}{2(k+1)}+\frac{1}{12}+\frac{1}{16}-\frac{1}{4(k+1)}-\frac{1}{4(k+2)}. $$ All of the terms with $k$ in the denominator will vanish as $k$ increases, leaving $$ \frac{1}{6}+\frac{1}{12}+\frac{1}{16}=\frac{5}{16}. $$</p> <p>Adding everything up, we get $$ \frac{23}{16}+\frac{5}{16}+0=\frac{7}{4}. $$</p> Answer 71: <blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary analysis, including knowledge of the sum $\sum_{k=1}^\infty \frac1{k^2}=\pi^2/6$, partial fraction expansion, and telescoping series. 
It is to that end we proceed.</strong></p> </blockquote> <hr> <p>The Harmonic number, $H_{n+2}$, can be written as </p> <p>$$H_{n+2}=\sum_{k=1}^{n+2}\frac1k=\frac{1}{n+2}+\frac{1}{n+1}+\sum_{k=1}^n \frac1k$$</p> <p>Therefore, we have</p> <p>$$\begin{align} \sum_{n=1}^\infty\frac{1}{n(n+2)}\sum_{k=1}^{n+2}\frac1k&amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty \frac1k \sum_{n=k}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\sum_{k=1}^\infty\frac1{2k^2}+\frac12\\\\ &amp;=\sum_{n=1}^\infty\frac{1}{n(n+2)^2}+\sum_{n=1}^\infty\frac{1}{n(n+1)(n+2)}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{\sum_{n=1}^\infty\left(\frac{1}{4n}-\frac{1}{4(n+2)}-\frac{1}{2(n+2)^2}\right)}\\\\ &amp;+\color{blue}{\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2(n+1)}+\frac{1}{2(n+2)}-\frac{1}{2(n+1)}\right)}\\\\ &amp;+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\color{red}{1-\frac{\pi^2}{12}}+\color{blue}{\frac14}+\frac{\pi^2}{12}+\frac12\\\\ &amp;=\frac74 \end{align}$$</p> Answer 72: <p>Actually, the calculation for this sum is very simple: all we need is the sum of a telescoping series. In fact, \begin{align} \sum_{n=1}^\infty\frac{H_{n+2}}{n(n+2)}&amp;=\frac12\sum_{n=1}^\infty H_{n+2}\left(\frac{1}{n}-\frac1{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{1}{n}\left(H_{n}+\frac{1}{n+1}+\frac{1}{n+2}\right)-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+2)}-\frac{H_{n+2}}{n+2}\right)\\\\ &amp;=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{H_n}{n}-\frac{H_{n+2}}{n+2}\right)+\frac12\sum_{n=1}^\infty\left(\frac{1}{n(n+1)}+\frac{1}{n(n+2)}\right)\\\\ &amp;=\frac12\left(H_1+\frac{H_2}2\right)+\frac78\\\\ &amp;=\frac{7}{4}.
\end{align}</p>
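<p><em>Editorial note, not part of any answer above: the closed form for the partial sums $S_p$ quoted in Answer 69 and the limit $7/4$ are easy to sanity-check numerically. The sketch below is an illustrative Python snippet using exact rational arithmetic; the names <code>H</code>, <code>S</code>, and <code>closed</code> are our own choices, not from any answer.</em></p>

<pre><code>from fractions import Fraction

# Partial sums of the series above, kept as exact rationals.
P = 200
H = Fraction(3, 2)             # running harmonic number, starts at H_2 = 3/2
S = Fraction(0)
for n in range(1, P + 1):
    H += Fraction(1, n + 2)    # now H equals H_{n+2}
    S += H / (n * (n + 2))

H_P3 = H + Fraction(1, P + 3)  # H_{P+3}, needed by the quoted closed form
closed = (-2 * (2*P**2 + 9*P + 9) * H_P3
          + (7*P**3 + 38*P**2 + 64*P + 33)) / (4 * (P + 1) * (P + 2) * (P + 3))
assert S == closed             # exact agreement with the CAS formula of Answer 69
print(float(S))                # about 1.7156; the gap to 7/4 shrinks like (log P + gamma + 1)/P
</code></pre>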
Different ways to come up with $1+2+3+\cdots +n=\frac{n(n+1)}{2}$
2,106,689
12
https://math.stackexchange.com/questions/2106689/different-ways-to-come-up-with-123-cdots-n-fracnn12
<p>I am trying to compile a list of the different ways to come up with the closed form for the sum in the title. So far I have the famous Gauss story, the argument by counting doubletons, and using generating functions. Is there any other (significantly) different way to discover the closed form?</p> <p>To clarify, Gauss's method was to take $1+2+3+\dots+n$, write it backwards, and add it up.</p>
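<p>For reference, the reversal trick described above, written out as a worked equation (this is just the standard Gauss argument restated, not an additional method):</p> $$S = 1 + 2 + \cdots + n, \qquad S = n + (n-1) + \cdots + 1,$$ $$2S = \underbrace{(n+1)+(n+1)+\cdots+(n+1)}_{n\ \text{terms}} = n(n+1) \quad\Longrightarrow\quad S=\frac{n(n+1)}{2}.$$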
[ { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest, divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12n+c$$</p>\n\n<p>for some constant $c$. Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods of computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because it is very interesting, but I want to be clear that the merit belongs to the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+1}-(n+1)r^{n}+1}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminate Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. 
Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$ (and noting that the left side is linear in $n$, so $D=E=\\cdots=0$), we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots+n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+...n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>Assume the sum is a quadratic polynomial $f(x)=ax^2+bx+c$ and fit it to the first few values:</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>Then we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $(n+1)$-th odd number. (The first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$, and so on.) From that we are inspired to conjecture, and can easily verify, that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$, $m^2$ is $(m-1)^2$ plus the $m$-th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$. This is the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words, the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. 
Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$, which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd, then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$, we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$.</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have an odd number of terms. So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3+\\cdots+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For an even number of terms $n$, apply this formula to the first $n-1$ terms and then add $n$, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" } ]
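<p>A short script cross-checking the closed form all of the answers arrive at — a minimal sketch; the test range $1\le n\le 200$ is an arbitrary choice:</p>

<pre><code># Python: brute-force check of 1 + 2 + ... + n == n(n+1)/2.
for n in range(1, 201):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("closed form verified for n = 1..200")
</code></pre>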
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into a right triangle with legs $n$ and $n$, plus $n$ small right triangles with legs $1$ and $1$, so the total area is</p>\n\n<p>$$1+2+3+...+n=\\frac{n^2}{2}+n\\cdot\\frac{1\\cdot 1}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>Assume the sum is a quadratic $f(x)=ax^2+bx+c$ and fit it to the first few partial sums:</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>We have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3-2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $(n+1)$th odd number. (The first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$, and so on.) From that we can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$, $m^2$ is $(m-1)^2$ plus the $m$th odd number. </p>\n\n<p>Recursively, that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$. This is the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words, the sum of the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$, which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd, then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$, we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$.</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have an odd number of terms.
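All of the answers above arrive at the same closed form, so a quick brute-force comparison makes a useful cross-check. The snippet below is only a sketch added for verification; it comes from none of the answers, and the helper name `gauss_sum` is an illustrative choice:

```python
# Brute-force sanity check of 1 + 2 + ... + n == n(n+1)/2.
def gauss_sum(n: int) -> int:
    # Integer division is exact here: one of n and n + 1 is always even.
    return n * (n + 1) // 2

for n in range(1000):
    assert sum(range(1, n + 1)) == gauss_sum(n), f"mismatch at n={n}"

print("closed form n(n+1)/2 verified for n = 0..999")
```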
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods for computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because it is very interesting, but I want to be clear that the merit belongs to the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you know the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+1}-(n+1)r^{n}+1}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminate Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\\dfrac{n(n+1)}{2}$$</p>\n\n<p>For even number of terms, we can use this formula plus another number, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 1}^{n}k &amp; = \\sum_{k = 1}^{n}k^{\\underline{1}} =\n\\left.{1 \\over 2}\\,k^{\\underline{2}}\\,\\right\\vert_{1}^{n + 1} =\n{1 \\over 2}\\,\\pars{n +1}^{\\,\\underline{2}}\\ -\\ {1 \\over 2}\\,1^{\\underline{2}} =\n{1 \\over 2}\\,\\pars{n + 1}n - {1 \\over 2}\\,1\\times 0 =\n\\bbx{\\ds{n\\pars{n + 1} \\over 2}}\n\\end{align}</p>\n" }, { "answer_id": 2106705, "score": 15, "text": "<p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p>\n\n<p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p>\n\n<p>$\\text{average} = \\frac{n+1}{2}$</p>\n\n<p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p>\n\n<p>$\\text{average} = \\frac{1+2+3+4+...+n}{n}$</p>\n\n<p>Equating these two:</p>\n\n<p>$\\frac{1+2+3+4+...+n}{n} = \\frac{n+1}{2}$</p>\n\n<p>And therefore</p>\n\n<p>$1+2+3+4+...+n = n\\frac{n+1}{2}$</p>\n" }, { "answer_id": 2106699, "score": 11, "text": "<p>One significantly different method is to bulldozer through using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>:</p>\n\n<p>$$\\sum_{k=1}^nk=\\int_0^nx\\ dx+\\frac12n+c=\\frac12n^2+\\frac12x+c$$</p>\n\n<p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p>\n\n<p>$$\\sum_{k=1}^nk=\\frac12n^2+\\frac12n$$</p>\n\n<hr>\n\n<p>One can also try the hockey-stick identity:</p>\n\n<p>$$\\sum_{k=1}^nk=\\sum_{k=1}^n\\binom k1=\\binom{n+1}2=\\frac{n(n+1)}{2!}$$</p>\n\n<p><a href=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://www.mathsisfun.com/images/pascals-triangle-2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p>\n\n<hr>\n\n<p>Many more methods to computing this sum may be found in one of my favorite questions:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula\">Methods to compute $\\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p>\n" }, { "answer_id": 2106739, "score": 8, "text": "<p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href=\"https://math.stackexchange.com/q/2035146\">this post</a>:</p>\n\n<p>If you knew of the geometric series, you would know that</p>\n\n<p>$$\\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\\dots+r^n$$</p>\n\n<p>If we differentiate both sides, we have</p>\n\n<p>$$\\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\\dots+nr^{n-1}$$</p>\n\n<p>Letting $r\\to1$ and applying L'Hospital's rule on the fraction, we end up with</p>\n\n<p>$$\\frac{n(n+1)}2=1+2+3+\\dots+n$$</p>\n" }, { "answer_id": 2106704, "score": 2, "text": "<p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \\sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \\frac{n(n + 1)}{2}$.</p>\n" }, { "answer_id": 2106745, "score": 2, "text": "<p>Since $(n+1)^2-n^2=2n+1$, we obtain:\n$$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$\nwhich gives the answer.</p>\n" }, { "answer_id": 2106747, "score": 2, "text": "<p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p>\n\n<p>Starting with $$1+2+3+4+5+\\cdots+n\\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\\cdots\\&amp;\\text c$. Hence$$1+2+3+4+5+\\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\\cdots\\&amp;\\text c\\tag2$$\nTherefore, we also have$$1+2+3+4+\\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\\cdots\\&amp;\\text c\\tag3$$\nSubtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\\tag4$$And equating coefficients, we have$$2C=1\\\\B+C=1\\\\\\implies C=B=\\dfrac 12,\\ A=0$$\nTherefore$$1+2+3+4+\\cdots n=0+\\dfrac 12n+\\dfrac 12n^2=\\dfrac {n^2+n}2\\tag5$$</p>\n" }, { "answer_id": 2106783, "score": 2, "text": "<p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p>\n\n<p>$$2(1+2+3+...n)=n^2+n$$</p>\n\n<p>$$1+2+3+...n=\\frac{n(n+1)}{2}$$</p>\n\n<p>This figure aims to illustrate the argument. </p>\n\n<p><a href=\"https://i.stack.imgur.com/2xhpJ.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2xhpJ.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Another way is by using area. Ignore the upper white area in the figure above. 
Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p>\n\n<p>$$1+2+3+....n=\\frac{n^2}{2}+n\\frac{1(1)}{2}$$</p>\n\n<p>$$=\\frac{n(n+1)}{2}$$</p>\n" }, { "answer_id": 2106824, "score": 2, "text": "<p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p>\n\n<p><a href=\"https://i.stack.imgur.com/wnMqz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/wnMqz.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2107008, "score": 2, "text": "<p>Note that we have the elementary sum</p>\n\n<p>$$\\sum_{\\ell=1}^m(1)=m \\tag 1$$</p>\n\n<p>Hence, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{m=1}^n m&amp;=\\sum_{m=1}^n \\sum_{\\ell=1}^m(1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n\\sum_{m=\\ell}^n (1)\\\\\\\\\n&amp;=\\sum_{\\ell=1}^n \\left(n-\\ell+1\\right)\\\\\\\\\n&amp;=n(n+1)-\\sum_{\\ell=1}^n \\ell\\\\\\\\\n&amp;=n(n+1)-\\sum_{m=1}^n m\\\\\\\\\n2\\sum_{m=1}^n m&amp;=n(n+1)\\\\\\\\\n\\sum_{m=1}^n m&amp;=\\frac{n(n+1)}{2}\n\\end{align}$$</p>\n\n<p>as was to be shown.</p>\n" }, { "answer_id": 2106736, "score": 1, "text": "<p>$f(x)=ax^2+bx+c$</p>\n\n<p>$$f(0)=c=0$$</p>\n\n<p>$$f(1)=a+b=1$$ \n$$f(2)=4a+2b=2+1=3$$</p>\n\n<p>we have </p>\n\n<p>$$4a+2b-2(a+b)=2a=3 -2=1$$\n$$2a=1 \\rightarrow a=\\frac{1}{2}$$\n$$b=\\frac{1}{2} $$\n$$f(x)=\\frac{x^2+x}{2}$$</p>\n" }, { "answer_id": 2106837, "score": 1, "text": "<p>Bearing in mind you asked for <em>different</em> ways:</p>\n\n<p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p>\n\n<p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p>\n\n<p>So $\\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p>\n\n<p>So $\\sum_{i=1}^m (2i) = \\sum_{i=1}^m(2i-1) + \\sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p>\n\n<p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p>\n\n<p>So $\\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p>\n\n<p>$\\sum_{k=1}^n k= \\frac n2(n+1)=\\frac {n(n+1)}2$. Which proves it if $n$ is even.</p>\n\n<p>If $n = 2m-1$ is odd then</p>\n\n<p>$\\sum_{k = 1}^n k =\\sum_{k=1}^{2m-1}k = \\sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p>\n\n<p>Substituting $n = 2m-1$ we have $m = \\frac{n+1}2$ and so $m(2m-1) = \\frac{n+1}2n=\\frac {n(n+1)}2$</p>\n\n<p>That's different.</p>\n" }, { "answer_id": 2107029, "score": 1, "text": "<p>One way I came up with is distributing the sum around a middle number, like this:\n$$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$\nWe can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3+\dots+n=\dfrac{n(n+1)}{2}$$</p>\n\n<p>For an even number of terms $n$, we can apply this formula to the first $n-1$ terms and then add the final term $n$, which is even.</p>\n" }, { "answer_id": 2108073, "score": 1, "text": "<p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}\n \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}\n \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}\n \newcommand{\dd}{\mathrm{d}}\n \newcommand{\ds}[1]{\displaystyle{#1}}\n \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}\n \newcommand{\ic}{\mathrm{i}}\n \newcommand{\mc}[1]{\mathcal{#1}}\n \newcommand{\mrm}[1]{\mathrm{#1}}\n \newcommand{\pars}[1]{\left(\,{#1}\,\right)}\n \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}\n \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}\n \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}\n \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$\n\begin{align}\n\sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} =\n\left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} =\n{1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} =\n{1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 =\n\bbx{\ds{n\pars{n + 1} \over 2}}\n\end{align}</p>\n" } ]
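<p>The difference-equation answer above (answer_id 2106704) lends itself to a direct numerical check; here is a minimal Python sketch that iterates the recurrence and compares it against the closed form (the bound of 200 is arbitrary):</p>
<pre><code># Iterate f(n) - f(n-1) = n with f(0) = 0 and compare with n(n+1)/2.
f = 0
for n in range(1, 200):
    f += n                         # f(n) = f(n-1) + n
    assert f == n * (n + 1) // 2   # closed form holds for every n tried
</code></pre>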
Question: Different ways to come up with $1+2+3+\cdots +n=\frac{n(n+1)}{2}$ Answer 1: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order, then the average of the numbers is in the middle; that's the smallest number plus the largest, divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 2: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12n+c$$</p> <p>for some constant $c$. Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods for computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 3: <p>I want to add this proof because it is very interesting, but I want to be clear that the merit belongs to the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 4: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 5: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 6: <p>Here is one with the use of <em>Indeterminate Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p>
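<p>The indeterminate-coefficients argument in Answer 6 amounts to fitting a quadratic through the first few partial sums; a small Python sketch of that fit using exact rationals (the sample points $f(0)=0$, $f(1)=1$, $f(2)=3$ are just the first partial sums):</p>
<pre><code>from fractions import Fraction

# Solve for f(n) = a*n^2 + b*n + c through f(0)=0, f(1)=1, f(2)=3.
c = Fraction(0)        # f(0) = c = 0
a = Fraction(1, 2)     # f(2) - 2*f(1): (4a + 2b) - 2(a + b) = 2a = 1
b = 1 - a              # f(1): a + b = 1

# The fitted quadratic reproduces every partial sum tried.
for n in range(100):
    assert a*n**2 + b*n + c == sum(range(1, n + 1))
</code></pre>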
Answer 7: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts the $n$ diagonal squares. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$ right triangles of size $1$ by $1$, so the total area is</p> <p>$$1+2+3+\dots+n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 8: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 9: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 10: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>We have</p> <p>$$4a+2b-2(a+b)=2a=3-2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 11: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $(n+1)$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we can guess, and easily verify, that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$, $m^2$ is $(m-1)^2$ plus the $m$th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$. This is the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$.</p> <p>That's different.</p>
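<p>The bookkeeping in Answer 11 is easy to confirm numerically: the first $m$ odd numbers sum to $m^2$, the first $m$ even numbers to $m^2+m$, and together they account for the first $2m$ integers. A quick Python sketch:</p>
<pre><code># Sum of first m odds is m^2; of first m evens is m^2 + m;
# together they are the first 2m integers, totalling m(2m + 1).
for m in range(1, 100):
    odds = sum(2*i - 1 for i in range(1, m + 1))
    evens = sum(2*i for i in range(1, m + 1))
    assert odds == m * m
    assert evens == m * m + m
    assert odds + evens == sum(range(1, 2*m + 1)) == m * (2*m + 1)
</code></pre>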
Answer 12: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have an odd number of terms. So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3+\dots+n=\dfrac{n(n+1)}{2}$$</p> <p>For an even number of terms $n$, we can apply this formula to the first $n-1$ terms and then add the final term $n$, which is even.</p> Answer 13: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p>
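<p>Answer 13 is finite calculus: with the falling factorial $k^{\underline{2}}=k(k-1)$, the function $\frac12 k^{\underline{2}}$ is a discrete antiderivative of $k$, so the sum telescopes. A short Python sketch of that telescoping (the helper <code>ff</code> is an illustrative name, not part of the answer):</p>
<pre><code>def ff(k, m):
    """Falling factorial: k * (k-1) * ... * (k-m+1)."""
    out = 1
    for j in range(m):
        out *= k - j
    return out

# 2 * sum of k^(1), k = 1..n, telescopes to k^(2) evaluated at n+1 and 1.
for n in range(1, 100):
    lhs = 2 * sum(ff(k, 1) for k in range(1, n + 1))
    rhs = ff(n + 1, 2) - ff(1, 2)   # ff(1, 2) = 1 * 0 = 0
    assert lhs == rhs == n * (n + 1)
</code></pre>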
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 16: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 17: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 18: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 19: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 20: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 21: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 22: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 23: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 24: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 25: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 26: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 27: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 28: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 29: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 30: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 31: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 32: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 33: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 34: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 35: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 36: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 37: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 38: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 39: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 40: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 41: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 42: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 43: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 44: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 45: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 46: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 47: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 48: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 49: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 50: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 51: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 52: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 53: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 54: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 55: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 56: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 57: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 58: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 59: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 60: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 61: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 62: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 63: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 64: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 65: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 66: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 67: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 68: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 69: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 70: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 71: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 72: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 73: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 74: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 75: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 76: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 77: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 78: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 79: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 80: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 81: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 82: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 83: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 84: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 85: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 86: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 87: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 88: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 89: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 90: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 91: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 92: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 93: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Answer 93: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12n+c$$</p> <p>for some constant $c$. Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods for computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 94: <p>I want to add this proof because it is very interesting, but I want to be clear that the merit belongs to the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+1}-(n+1)r^{n}+1}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule twice on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 95: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 96: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 97: <p>Here is one with the use of <em>Indeterminate Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+(n+1)=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, and noting that matching the coefficients of $n^2$ and higher powers forces $D=E=\cdots=0$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots+n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 98: <p>Consider $1+2+3+\dots+n$ and add $1+2+3+\dots+n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but counts the $n$ diagonal squares twice. So,</p> <p>$$2(1+2+3+\dots+n)=n^2+n$$</p> <p>$$1+2+3+\dots+n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument.</p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$ right triangles of size $1$ by $1$, so the total area is</p> <p>$$1+2+3+\dots+n=\frac{n^2}{2}+n\cdot\frac{1\cdot 1}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p>
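<p>The telescoping identity $(k+1)^2-k^2=2k+1$ used above is easy to verify numerically as well; a minimal Python sketch (names are my own):</p> <pre><code># Check the telescoping identity: summing (k+1)^2 - k^2 over k = 1..n
# collapses to (n+1)^2 - 1, which equals 2*(1+2+...+n) + n.

def telescoped(n: int) -> int:
    """Sum of (k + 1)**2 - k**2 for k = 1..n."""
    return sum((k + 1) ** 2 - k ** 2 for k in range(1, n + 1))

for n in range(1, 60):
    s = sum(range(1, n + 1))
    assert telescoped(n) == (n + 1) ** 2 - 1 == 2 * s + n
print("telescoping identity verified for n = 1..59")
</code></pre>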
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 100: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 101: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 102: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 103: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 104: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 105: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 106: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 107: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 108: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 109: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 110: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 111: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 112: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 113: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 114: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 115: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 116: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 117: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 118: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 119: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 120: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 121: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 122: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 123: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 124: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 125: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 126: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 127: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 128: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 129: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 130: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 131: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 132: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 133: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 134: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 135: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 136: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 137: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 138: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 139: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 140: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 141: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 142: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 143: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 144: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 145: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 146: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 147: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 148: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 149: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 150: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 151: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 152: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 153: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 154: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 155: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 156: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 157: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 158: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 159: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 160: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 161: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 162: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 163: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 164: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 165: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 166: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 167: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 168: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 169: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 170: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 171: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 172: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 173: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 174: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 175: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 176: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 177: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 178: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 179: <p>$f(x)=ax^2+bx+c$</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 180: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $n+1$th odd number. (the first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on). From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$ that $m^2$ is $(m-1)^2$ plus the $m$ th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$ This the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + m^2 + m$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers. Or in other words the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 181: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have odd number of terms. 
So, we get a formula for the sum, where $n$ is an odd number: $$1+2+3...+n=\dfrac{n(n+1)}{2}$$</p> <p>For even number of terms, we can use this formula plus another number, which is even.</p> Answer 182: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 1}^{n}k &amp; = \sum_{k = 1}^{n}k^{\underline{1}} = \left.{1 \over 2}\,k^{\underline{2}}\,\right\vert_{1}^{n + 1} = {1 \over 2}\,\pars{n +1}^{\,\underline{2}}\ -\ {1 \over 2}\,1^{\underline{2}} = {1 \over 2}\,\pars{n + 1}n - {1 \over 2}\,1\times 0 = \bbx{\ds{n\pars{n + 1} \over 2}} \end{align}</p> Answer 183: <p>You have an education tag so perhaps you're looking for a less rigorous proof which might just make sense to students.</p> <p>If you have the consecutive numbers $1,2,3,4,...,n$ in order then the average of the numbers is in the middle, that's the smallest number plus the largest divided by $2$. </p> <p>$\text{average} = \frac{n+1}{2}$</p> <p>Of course the main definition of the average is the sum of all the numbers divided by how many numbers there are. </p> <p>$\text{average} = \frac{1+2+3+4+...+n}{n}$</p> <p>Equating these two:</p> <p>$\frac{1+2+3+4+...+n}{n} = \frac{n+1}{2}$</p> <p>And therefore</p> <p>$1+2+3+4+...+n = n\frac{n+1}{2}$</p> Answer 184: <p>One significantly different method is to bulldozer through using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>:</p> <p>$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n+c=\frac12n^2+\frac12x+c$$</p> <p>for some constant $c$. 
Plugging in $n=1$ reveals that $c=0$, so</p> <p>$$\sum_{k=1}^nk=\frac12n^2+\frac12n$$</p> <hr> <p>One can also try the hockey-stick identity:</p> <p>$$\sum_{k=1}^nk=\sum_{k=1}^n\binom k1=\binom{n+1}2=\frac{n(n+1)}{2!}$$</p> <p><a href="https://www.mathsisfun.com/images/pascals-triangle-2.gif" rel="nofollow noreferrer"><img src="https://www.mathsisfun.com/images/pascals-triangle-2.gif" alt="enter image description here"></a></p> <p>(poor ones get a lame name) As you can see, each diagonal is the sum of the previous due to how Pascal's triangle is made.</p> <hr> <p>Many more methods to computing this sum may be found in one of my favorite questions:</p> <p><a href="https://math.stackexchange.com/questions/2035188/methods-to-compute-sum-k-1nkp-without-faulhabers-formula">Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula</a></p> Answer 185: <p>I want to add this proof because is very interesting, but I want to be clear stating that the merit is of the user @SimplyBeautifulArt in <a href="https://math.stackexchange.com/q/2035146">this post</a>:</p> <p>If you knew of the geometric series, you would know that</p> <p>$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$</p> <p>If we differentiate both sides, we have</p> <p>$$\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$</p> <p>Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with</p> <p>$$\frac{n(n+1)}2=1+2+3+\dots+n$$</p> Answer 186: <p>The difference equation $f(n) - f(n - 1) = n, f(0) = 0$ gives $f(n) = \sum_{x=1}^n x$. Solving that difference equation is pretty easy, and it shows that $f(n) = \frac{n(n + 1)}{2}$.</p> Answer 187: <p>Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.</p> Answer 188: <p>Here is one with the use of <em>Indeterminant Coefficients</em>:</p> <p>Starting with $$1+2+3+4+5+\cdots+n\tag1$$Assume it to be equal to $A+Bn+Cn^2+Dn^3+\cdots\&amp;\text c$. Hence$$1+2+3+4+5+\cdots+n=A+Bn+Cn^2+Dn^3+En^4+\cdots\&amp;\text c\tag2$$ Therefore, we also have$$1+2+3+4+\cdots+n+1=A+B(n+1)+C(n+1)^2+D(n+1)^3+\cdots\&amp;\text c\tag3$$ Subtracting $(2)$ from $(3)$, we get$$n+1=B+C(2n+1)\tag4$$And equating coefficients, we have$$2C=1\\B+C=1\\\implies C=B=\dfrac 12,\ A=0$$ Therefore$$1+2+3+4+\cdots n=0+\dfrac 12n+\dfrac 12n^2=\dfrac {n^2+n}2\tag5$$</p> Answer 189: <p>Consider $1+2+3+...n$. Add $1+2+3+...n$ to it. This counts the number of unit squares in an $n$ by $n$ square, but double counts $n$. So, </p> <p>$$2(1+2+3+...n)=n^2+n$$</p> <p>$$1+2+3+...n=\frac{n(n+1)}{2}$$</p> <p>This figure aims to illustrate the argument. </p> <p><a href="https://i.stack.imgur.com/2xhpJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xhpJ.jpg" alt="enter image description here"></a></p> <p>Another way is by using area. Ignore the upper white area in the figure above. Then the area of the shape can be divided into an $n$ by $n$ right triangle and $n$, $1$ by $1$ right triangles so the total area is,</p> <p>$$1+2+3+....n=\frac{n^2}{2}+n\frac{1(1)}{2}$$</p> <p>$$=\frac{n(n+1)}{2}$$</p> Answer 190: <p>This proof without words was entered and then deleted by @user170231, so I'm marking my answer community wiki. 
I don't think it counts as a duplicate of Gauss's proof.</p> <p><a href="https://i.stack.imgur.com/wnMqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wnMqz.png" alt="enter image description here"></a></p> Answer 191: <p>Note that we have the elementary sum</p> <p>$$\sum_{\ell=1}^m(1)=m \tag 1$$</p> <p>Hence, we have</p> <p>$$\begin{align} \sum_{m=1}^n m&amp;=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &amp;=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &amp;=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &amp;=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &amp;=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&amp;=n(n+1)\\\\ \sum_{m=1}^n m&amp;=\frac{n(n+1)}{2} \end{align}$$</p> <p>as was to be shown.</p> Answer 192: <p>Assume the sum $1+2+\cdots+x$ is a quadratic polynomial, $f(x)=ax^2+bx+c$.</p> <p>$$f(0)=c=0$$</p> <p>$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$</p> <p>we have </p> <p>$$4a+2b-2(a+b)=2a=3 -2=1$$ $$2a=1 \rightarrow a=\frac{1}{2}$$ $$b=\frac{1}{2} $$ $$f(x)=\frac{x^2+x}{2}$$</p> Answer 193: <p>Bearing in mind you asked for <em>different</em> ways:</p> <p>$(n+1)^2 = n^2 + (2n + 1)$. $2n+1$ is the $(n+1)$th odd number. (The first odd number is $1 = 2*1 -1$; the second odd number is $3=2*2 -1$ and so on.) From that we are inspired to and can easily verify that $m^2 = (m-1)^2 + (2m-1)$. So we can say with confidence that for any positive $m$, $m^2$ is $(m-1)^2$ plus the $m$th odd number. </p> <p>Recursively that means $m^2$ is the sum of the first $m$ odd numbers. </p> <p>So $\sum_{i=1}^m (2i -1) = m^2$. This is the sum of the first $m$ odd numbers.</p> <p>So $\sum_{i=1}^m (2i) = \sum_{i=1}^m(2i-1) + \sum_{i=1}^m 1= m^2 + m$. This is the sum of the first $m$ even numbers.</p> <p>So $m^2 + (m^2 + m)$ is the sum of the first $m$ odd numbers plus the first $m$ even numbers; in other words, the sum of the first $2m$ numbers.</p> <p>So $\sum_{k = 1}^{2m}k = 2m^2 + m = m(2m + 1)$. Replace $2m$ with $n$ and you get</p> <p>$\sum_{k=1}^n k= \frac n2(n+1)=\frac {n(n+1)}2$. Which proves it if $n$ is even.</p> <p>If $n = 2m-1$ is odd then</p> <p>$\sum_{k = 1}^n k =\sum_{k=1}^{2m-1}k = \sum_{k=1}^{2m}k - 2m = m(2m+1)- 2m = m(2m - 1)$.</p> <p>Substituting $n = 2m-1$ we have $m = \frac{n+1}2$ and so $m(2m-1) = \frac{n+1}2n=\frac {n(n+1)}2$</p> <p>That's different.</p> Answer 194: <p>One way I came up with is distributing the sum around a middle number, like this: $$1+2+3+4+5+6+7=(4-3)+(4-2)+(4-1)+4+(4+1)+(4+2)+(4+3)$$ We can do this for all sums which have an odd number of terms.
So, we get a formula for the sum when $n$ is an odd number: $$1+2+3+\dots+n=\dfrac{n(n+1)}{2}$$</p> <p>For an even number of terms $n$, apply this formula to the first $n-1$ terms (an odd count) and then add the final even term $n$.</p>
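<p><em>Editor's sketch (not part of the original answers): a minimal numeric cross-check of the closed form and of the hockey-stick identity used above, assuming Python 3.8+ for <code>math.comb</code>.</em></p> <pre><code># Check sum(1..n) == n(n+1)/2 == C(n+1, 2) for small n
from math import comb

for n in range(1, 20):
    brute = sum(range(1, n + 1))          # direct summation
    assert brute == n * (n + 1) // 2      # the closed form
    assert brute == comb(n + 1, 2)        # hockey-stick identity
print("all checks passed")
</code></pre>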
Prove known closed form for $\int_0^\infty e^{-x}I_0\left(\frac{x}{3}\right)^3\;dx$
2,109,895
12
https://math.stackexchange.com/questions/2109895/prove-known-closed-form-for-int-0-infty-e-xi-0-left-fracx3-right3
<p>I know that the following identity is correct, but I would love to see a derivation:</p> <p>$$\int_0^\infty e^{-x}I_0\left(\frac{x}{3}\right)^3\;dx=\frac{\sqrt{6}}{32\pi^3}\Gamma\left(\frac{1}{24}\right)\Gamma\left(\frac{5}{24}\right)\Gamma\left(\frac{7}{24}\right)\Gamma\left(\frac{11}{24}\right)\tag{1}$$</p> <p>where $I_0(x)$ is a <a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I.CE.B1_.2C_K.CE.B1">modified</a> Bessel function of the <a href="https://en.wikipedia.org/wiki/Bessel_function#Bessel_functions_of_the_first_kind:_J.CE.B1">first kind</a>. The form of the RHS makes me assume that the <a href="https://en.wikipedia.org/wiki/Elliptic_integral#Complete_elliptic_integral_of_the_first_kind">elliptic integral</a> <a href="http://mathworld.wolfram.com/EllipticIntegralSingularValue.html">singular value</a> $K(k_6)$ is somehow involved but I see no obvious way of transforming the LHS into an elliptic integral. My request is to see a way of evaluating this integral <em>directly</em> to show that it equals the RHS.</p> <hr> <p><strong>Background</strong>: This integral arises in the theory of the <a href="https://math.dartmouth.edu/~pw/math100w13/mare.pdf">recurrence</a> of <a href="https://en.wikipedia.org/wiki/Random_walk">random walks</a> on a <a href="https://en.wikipedia.org/wiki/Integer_lattice">cubic lattice</a>. Jonathan Novak elegantly shows <a href="http://math.mit.edu/classes/18.095/lect2/notes.pdf">here</a> using <a href="https://en.wikipedia.org/wiki/Generating_function">generating functions</a> that the recurrence probability of a random walk on $\mathbb{Z}^d$ is $p(d)=1-\frac{1}{u(d)}$ where:</p> <p>$$u(d)=\int_0^\infty e^{-x}I_0\left(\frac{x}{d}\right)^d\;dx\tag{2}$$</p> <p>The <a href="http://mathworld.wolfram.com/PolyasRandomWalkConstants.html">MathWorld</a> page lists several ways of writing $u(3)$, including $(1)$, but in none of the references could I find a specific derivation of $(1)$. Most relevant discussions I found (e.g. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC431006/pdf/pnas00027-0042.pdf">here</a> and <a href="http://mathworld.wolfram.com/WatsonsTripleIntegrals.html">here</a> where <a href="http://mathworld.wolfram.com/CompleteEllipticIntegraloftheFirstKind.html">elliptic integrals</a> do appear) work from an alternative way of writing $u(3)$, namely:</p> <p>$$u(3)=\frac{3}{(2\pi)^3}\int_{-\pi}^\pi\int_{-\pi}^\pi\int_{-\pi}^\pi\frac{dx\;dy\;dz}{3-\cos{x}-\cos{y}-\cos{z}}\tag{3}$$</p> <p>rather than a representation like $(2)$. I believe $(3)$ comes from an alternative way of solving the random walk problem (e.g. see <a href="https://www.jstor.org/stable/pdf/2098789.pdf">here</a>) which I am not very familiar with. Accordingly, I am asking for a derivation of $(1)$ that works directly from the Bessel function integral, rather than considering other ways of calculating $u(3)$ (deriving the RHS of $(1)$ from $(3)$ would <em>not</em> answer my question). Although I appreciate that there may be simpler proofs of $(1)$ from the theory of random walks, I am wondering if there is a nice way of evaluating the integral in $(1)$ directly.</p>
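<p><em>Editor's sketch (not part of the original question): a minimal numeric sanity check of $(1)$, assuming Python with mpmath. The integrand decays only like $x^{-3/2}$ at infinity, so the quadrature interval is split by hand.</em></p> <pre><code># Compare both sides of identity (1) numerically
from mpmath import mp, quad, exp, besseli, gamma, sqrt, pi, inf, mpf

mp.dps = 30
lhs = quad(lambda x: exp(-x) * besseli(0, x / 3)**3, [0, 10, 1000, inf])
rhs = (sqrt(6) / (32 * pi**3) * gamma(mpf(1)/24) * gamma(mpf(5)/24)
       * gamma(mpf(7)/24) * gamma(mpf(11)/24))
print(lhs)   # ~ 1.516386..., Polya's u(3)
print(rhs)   # same value from the Gamma closed form
</code></pre>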
[ { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let us outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi s}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I have been hesitating over whether I should write this post, because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question, I guess I am able to contribute more than all that I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. (A remark on normalization: the quantity below is the bare triple integral, which equals $\\frac{\\pi^3}{3}$ times the $u(3)$ defined in the question.)\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)})}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\pi\\sqrt{2} \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself to Cauchy residue-theorem computations. However, for the time being my knowledge of the singularities of the elliptic integral is not sufficient to be able to complete this computation.</p>\n" } ]
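<p><em>Editor's sketch (not part of the original answers): a minimal numeric check of the Laplace transform $(3)$ above, assuming Python with mpmath. Note that $K$ in the answer takes the modulus $k$, while mpmath's <code>ellipk</code> takes the parameter $m=k^2$, hence the square below.</em></p> <pre><code># Check L(I_0(t)^2)(s) = 2/(pi*s) * K(2/s) at a sample point s = 3
from mpmath import mp, quad, exp, besseli, ellipk, pi, inf

mp.dps = 25
s = 3
lhs = quad(lambda t: exp(-s * t) * besseli(0, t)**2, [0, inf])
rhs = 2 / (pi * s) * ellipk((2 / s)**2)   # modulus k = 2/s, parameter m = k^2
print(lhs, rhs)   # the two values agree to working precision
</code></pre>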
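<p><em>Editor's sketch: a numeric check of the Clausen-type identity $(7)$ from the first answer, again assuming mpmath and the modulus convention for $K$. The series terms are built recursively so that no double-factorial edge cases arise.</em></p> <pre><code># Check K(k)^2 = (pi^2/4) * sum ((2n-1)!!/(2n)!!)^3 (2kk')^(2n) at k = 1/2
from mpmath import mp, ellipk, pi, sqrt, mpf

mp.dps = 25
k = mpf(1) / 2
kp = sqrt(1 - k**2)                 # complementary modulus k'
x = (2 * k * kp)**2                 # series variable, here 3/4 &lt; 1
term, total = mpf(1), mpf(1)        # the n = 0 term is 1
for n in range(400):
    term *= (mpf(2 * n + 1) / (2 * n + 2))**3 * x
    total += term
print(pi**2 / 4 * total)            # partial sum of the right-hand side
print(ellipk(k**2)**2)              # K(k)^2 with mpmath's parameter m = k^2
</code></pre>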
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} \\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. 
If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already.\n\\begin{eqnarray}\nu(3) &amp;=&amp; \\int\\limits_{[0,\\pi]^3} \\frac{1}{3-\\cos(t)-\\cos(u)-\\cos(v)} dt du dv\\\\\n&amp;=&amp;-\\pi \\int\\limits_{[0,\\pi]^2} \\frac{1}{\\sqrt{-4+\\cos(t)+\\cos(u)}\\sqrt{-2+\\cos(t)+\\cos(u)}} dt du\\\\\n&amp;\\underbrace{=}_{v=\\tan(u/2)}&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{(-3-5 v^2+(1+v^2) \\cos(t))(-1-3 v^2 +(1+v^2) \\cos(t))}}dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi] \\times [0,\\infty)} \\frac{1}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}}\\cdot \\frac{1}{\\sqrt{(1+v^2)(1+v^2 \\frac{(\\cos(t)-3)^2}{(\\cos(t)-5)(\\cos(t)-1)}}} dt dv\\\\\n&amp;=&amp; 2\\pi \\int\\limits_{[0,\\pi]} \\frac{K(-\\frac{4}{5-6 \\cos(t)+\\cos(t)^2})}{\\sqrt{5-6 \\cos(t)+\\cos(t)^2}} dt\\\\\n&amp;\\underbrace{=}_{v=\\tan(t/2)}&amp; 2\\pi \\int_{[0,\\infty)} \\frac{K(-\\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\\sqrt{2+3 v^2}} dv\\\\\n&amp;\\underbrace{=}_{z=-\\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\\\\n&amp;=&amp; \\frac{\\pi}{2} \\int\\limits_{(-\\infty,-1/3)} \\sqrt{\\frac{-1+3 z-3\\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \\frac{1}{\\sqrt{1-z}}K(\\frac{z}{z-1}) dz\\\\\n&amp;\\underbrace{=}_{y=z/(z-1)}&amp;\\frac{\\imath \\pi}{2} \\int\\limits_{(1/4,1)} \\frac{K(y)}{\\sqrt{y(1-3 \\sqrt{y}+2 y)}}dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp; \\imath \\pi \\int\\limits_{(1/2,1)} \\frac{K(y^2)}{\\sqrt{(1-y)(1-2 y)}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow 2y-1}&amp; \\frac{\\pi}{\\sqrt{2}} \\int\\limits_{(0,1)} \\frac{K(\\frac{(1+y)^2}{4})}{\\sqrt{(1-y)y}} dy\\\\\n&amp;\\underbrace{=}_{y\\leftarrow \\sqrt{y}}&amp;\\pi \\sqrt{2} \\int\\limits_{(0,1)}\\frac{K(\\frac{(1+y^2)^2}{4})}{\\sqrt{1-y^2}} dy\\\\\n&amp;\\underbrace{=}_{y=\\sin(\\phi)}&amp; \\int\\limits_{(0,\\pi/2)} K\\left( \\frac{(1+\\sin(\\phi)^2)^2}{4}\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{\\phi=\\phi/2}&amp; \\frac{\\pi \\sqrt{2}}{4} \\int\\limits_{[-\\pi,\\pi]} K\\left( \\frac{19}{32} - \\frac{3}{8} \\cos(\\phi)+\\frac{1}{32} \\cos(2 \\phi)\\right) d\\phi\\\\\n&amp;\\underbrace{=}_{z=\\exp(\\imath \\phi)}&amp; \\frac{\\pi \\sqrt{2}}{4} 
\\oint\\limits_{|z|=1}K\\left(\\frac{19}{32} - \\frac{3}{8}\\frac{z+1/z}{2}+\\frac{1}{32} \\frac{z^2+1/z^2}{2} \\right) \\frac{dz}{\\imath z}\n\\end{eqnarray}</p>\n\n<p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>\n" }, { "answer_id": 2110344, "score": 4, "text": "<p>I think many identities are involved here, so let we outline a partial answer (for now). We have\n$$ I_0(z) = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2},\\qquad \\mathcal{L}(I_0(z))=\\frac{\\mathbb{1}_{s&gt;1}}{\\sqrt{s^2-1}}\\tag{1} $$\nand\n$$ I_0(z)^2 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}\\sum_{k=0}^{n}\\frac{1}{k!^2(n-k)!^2} = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n n!^2}\\sum_{k=0}^{n}\\binom{n}{k}^2=\\sum_{n\\geq 0}\\frac{z^{2n}\\binom{2n}{n}}{4^n n!^2}\\tag{2}$$\nso:\n$$ \\mathcal{L}(I_0(z)^2) =\\frac{2}{\\pi}\\,K\\!\\left(\\frac{2}{s}\\right)\\mathbb{1}_{s&gt;2}\\tag{3}$$\nand the connection with the complete elliptic integral of the first kind is less obscure. If we set\n$$ A_n = \\sum_{k=0}^{n}\\frac{\\binom{2k}{k}}{k!^2(n-k)!^2}\\tag{4}$$\nwe have:\n$$ I_0(z)^3 = \\sum_{n\\geq 0}\\frac{z^{2n}}{4^n}A_n \\tag{5} $$\nand\n$$\\begin{eqnarray*} \\int_{0}^{+\\infty}I_0(z)^3 e^{-3z}\\,dz = \\sum_{n\\geq 0}\\frac{(2n)!}{4^n 3^{2n+1}}\\,A_n&amp;=&amp;\\sum_{0\\leq k\\leq n}\\frac{(2n)!\\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\\\&amp;=&amp;\\sum_{k\\geq 0}\\frac{(2k)!}{k!^4}\\sum_{n\\geq k}\\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\\tag{6}\\end{eqnarray*}$$\nThe definitive connection with a singular value <em>should</em> now be a consequence of the identity\n$$ K(k)^2 = \\frac{\\pi^2}{4}\\sum_{n\\geq 0}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^3 (2kk')^{2n} \\tag{7}$$\nthat holds for any $0&lt;k&lt;\\frac{1}{\\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>\n" }, { "answer_id": 2658521, "score": 2, "text": "<p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. 
Question: Prove known closed form for $\int_0^\infty e^{-x}I_0\left(\frac{x}{3}\right)^3\;dx$ Answer 1: <p>I think many identities are involved here, so let us outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s>1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi s}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s>2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&=&\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&=&\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0<k<\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p>
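A quick numerical sanity check of (1), (3), and the target value can be run with mpmath. Two caveats make the sketch below hedged rather than authoritative: mpmath's `ellipk(m)` takes the elliptic parameter $m=k^2$, whereas $K(2/s)$ in (3) is written with the modulus $k=2/s$; and the comparison constant assumes that the "known closed form" meant by the question is Watson's $\frac{\sqrt{6}}{32\pi^3}\,\Gamma(\tfrac{1}{24})\Gamma(\tfrac{5}{24})\Gamma(\tfrac{7}{24})\Gamma(\tfrac{11}{24})$, which is not stated in the visible text.

```python
# Hedged numerical sanity check with mpmath.  Conventions: mpmath's ellipk(m)
# takes the elliptic *parameter* m = k^2, while K(2/s) in (3) is written with
# the *modulus* k = 2/s; the constant compared against at the end is Watson's,
# an assumption about what the question calls the "known closed form".
from mpmath import mp, mpf, quad, besseli, ellipk, gamma, exp, sqrt, pi

mp.dps = 15

# (1): L(I_0)(s) = 1/sqrt(s^2 - 1) for s > 1, tested at s = 2
s = mpf(2)
print(quad(lambda t: exp(-s*t)*besseli(0, t), [0, mp.inf]), 1/sqrt(s**2 - 1))

# (3): L(I_0^2)(s) = (2/(pi s)) K(2/s) for s > 2, tested at s = 3
s = mpf(3)
print(quad(lambda t: exp(-s*t)*besseli(0, t)**2, [0, mp.inf]),
      2/(pi*s)*ellipk((2/s)**2))

# Target integral; its integrand decays only like x^(-3/2), so the quadrature
# is split at x = 60 to help the infinite-interval transformation.
target = quad(lambda x: exp(-x)*besseli(0, x/3)**3, [0, 60, mp.inf])
watson = sqrt(6)/(32*pi**3) * gamma(mpf(1)/24)*gamma(mpf(5)/24) \
         * gamma(mpf(7)/24)*gamma(mpf(11)/24)
print(target, watson)  # both ~ 1.5163861 under the stated assumption
```

All three printed pairs should agree to roughly the working precision, the last one near $1.51639$ if the assumed closed form is indeed the intended one.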
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 20: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 21: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 22: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. 
\begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 23: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. 
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 24: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 25: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 26: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. 
\begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 27: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. 
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 28: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 29: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 30: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. 
\begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 31: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. 
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 32: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 33: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 34: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. 
\begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 35: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. 
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 36: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 37: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 38: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. 
\begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p> Answer 39: <p>I think many identities are involved here, so let we outline a partial answer (for now). We have $$ I_0(z) = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2},\qquad \mathcal{L}(I_0(z))=\frac{\mathbb{1}_{s&gt;1}}{\sqrt{s^2-1}}\tag{1} $$ and $$ I_0(z)^2 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}\sum_{k=0}^{n}\frac{1}{k!^2(n-k)!^2} = \sum_{n\geq 0}\frac{z^{2n}}{4^n n!^2}\sum_{k=0}^{n}\binom{n}{k}^2=\sum_{n\geq 0}\frac{z^{2n}\binom{2n}{n}}{4^n n!^2}\tag{2}$$ so: $$ \mathcal{L}(I_0(z)^2) =\frac{2}{\pi}\,K\!\left(\frac{2}{s}\right)\mathbb{1}_{s&gt;2}\tag{3}$$ and the connection with the complete elliptic integral of the first kind is less obscure. 
If we set $$ A_n = \sum_{k=0}^{n}\frac{\binom{2k}{k}}{k!^2(n-k)!^2}\tag{4}$$ we have: $$ I_0(z)^3 = \sum_{n\geq 0}\frac{z^{2n}}{4^n}A_n \tag{5} $$ and $$\begin{eqnarray*} \int_{0}^{+\infty}I_0(z)^3 e^{-3z}\,dz = \sum_{n\geq 0}\frac{(2n)!}{4^n 3^{2n+1}}\,A_n&amp;=&amp;\sum_{0\leq k\leq n}\frac{(2n)!\binom{2k}{k}}{4^n 3^{2n+1} k!^2 (n-k)!^2}\\&amp;=&amp;\sum_{k\geq 0}\frac{(2k)!}{k!^4}\sum_{n\geq k}\frac{(2n)!}{4^n 3^{2n+1}(n-k)!^2}.\tag{6}\end{eqnarray*}$$ The definitive connection with a singular value <em>should</em> now be a consequence of the identity $$ K(k)^2 = \frac{\pi^2}{4}\sum_{n\geq 0}\left(\frac{(2n-1)!!}{(2n)!!}\right)^3 (2kk')^{2n} \tag{7}$$ that holds for any $0&lt;k&lt;\frac{1}{\sqrt{2}}$ due to the (AGM-like) invariance properties of the complete elliptic integral of the first kind.</p> Answer 40: <p>I am having hesitations whether I should be writing this post because I will definitely not derive the closed form in question. Yet, after having glanced through all the material linked to this question I guess I am able to contribute more than all what I have seen so far. We start from the formula (3) since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &amp;=&amp; \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &amp;=&amp;-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &amp;\underbrace{=}_{v=\tan(u/2)}&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}}} dt dv\\ &amp;=&amp; 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &amp;\underbrace{=}_{v=\tan(t/2)}&amp; 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &amp;\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &amp;=&amp; \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &amp;\underbrace{=}_{y=z/(z-1)}&amp;\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp; \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &amp;\underbrace{=}_{y\leftarrow 2y-1}&amp; \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &amp;\underbrace{=}_{y\leftarrow \sqrt{y}}&amp;\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &amp;\underbrace{=}_{y=\sin(\phi)}&amp; \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &amp;\underbrace{=}_{\phi=\phi/2}&amp; \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &amp;\underbrace{=}_{z=\exp(\imath \phi)}&amp; \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray}</p> <p>I guess all the steps are pretty clear. The last expression lends itself for Cauchy-residue theorem computations. 
However, for the time being my knowledge of the singularities of the elliptic function is not sufficient to be able to complete this computation.</p>
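A quick way to sanity-check (4)&ndash;(6) numerically is the following minimal Python sketch (an illustration with my own variable names, not part of either answer's derivation; it assumes SciPy, whose <code>scipy.special.i0e(z)</code> returns $e^{-z}I_0(z)$, so the integrand in (6) is simply <code>i0e(z)**3</code>):

<pre><code># Minimal sanity check of (4)-(6); assumes SciPy is installed.
import math
from fractions import Fraction
from scipy.integrate import quad
from scipy.special import i0, i0e

def A(n):
    # A_n from (4), computed in exact rational arithmetic to dodge
    # float overflow/underflow in the large factorials.
    return sum(Fraction(math.comb(2 * k, k),
                        math.factorial(k) ** 2 * math.factorial(n - k) ** 2)
               for k in range(n + 1))

# Check (5) at an arbitrary test point: the two printed values
# should agree to machine precision.
z = 1.7
series5 = sum(z ** (2 * n) / 4 ** n * float(A(n)) for n in range(40))
print(i0(z) ** 3, series5)

# Left-hand side of (6), by quadrature: i0e(t) = exp(-t) * I_0(t).
lhs, _ = quad(lambda t: i0e(t) ** 3, 0, math.inf)

# Truncated right-hand side of (6). Since s = 3 is the abscissa of
# convergence, the terms decay only like n**(-3/2), so the partial sum
# approaches the quadrature value slowly (tail error is O(N**(-1/2))).
rhs = sum(float(Fraction(math.factorial(2 * n), 4 ** n * 3 ** (2 * n + 1)) * A(n))
          for n in range(300))
print(lhs, rhs)
</code></pre>

For reference, the quadrature value equals $u(3)/\pi^3$: expanding $\frac{1}{3-\cos t-\cos u-\cos v}=\int_0^{+\infty}e^{-(3-\cos t-\cos u-\cos v)z}\,dz$ and using $\int_0^{\pi}e^{z\cos t}\,dt=\pi I_0(z)$ turns the triple integral in Answer 16 into the Laplace transform in (6).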
The fractional part of $n\log(n)$
2,110,014
12
https://math.stackexchange.com/questions/2110014/the-fractional-part-of-n-logn
<p>When I was thinking about my <a href="https://math.stackexchange.com/questions/1892135/about-a-mysterious-sequence-who-seems-to-follow-some-patterns">other question</a> on the sequence </p> <p>$$p(n)=\min_a\left\{a+b,\ \left\lfloor\frac {2^a}{3^b}\right\rfloor=n\right\}$$</p> <p>I found an interesting link with the sequence</p> <p>$$q(n)=\{n\log(n)\}=n\log(n)-[n\log(n)]$$</p> <p>the fractional part of $n\log(n)$.</p> <p>If we draw the sequence $q$, we get this (for $n$ up to $520$, $5\,000$ and $30\,000$ respectively):</p> <p><a href="https://i.stack.imgur.com/t8aBX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t8aBX.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/A6Hzs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A6Hzs.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/qYdiJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qYdiJ.png" alt="enter image description here"></a></p> <p>We can see some <em>gaps</em> looking like parabolas.</p> <blockquote> <p>What is causing those gaps? Why do they look like this?</p> </blockquote> <p>In my <a href="https://math.stackexchange.com/questions/1892135/about-a-mysterious-sequence-who-seems-to-follow-some-patterns">other question</a>, we can observe similar structures:</p> <p><a href="https://i.stack.imgur.com/gc30V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gc30V.png" alt="enter image description here"></a></p> <p>An answer tells me that this is due to the continued fraction of $\log2/\log 3$.</p> <blockquote> <p>Could the two questions be linked?</p> </blockquote> <hr> <p><strong>Edit.</strong></p> <p>Thanks to Gottfried Helms's idea, here are some more drawings with the modulus varying.</p> <p>The modulus takes the following values, respectively:</p> <p>$$1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100.$$</p> <p>A modulus of $1$ corresponds to the fractional part.</p> <p><a href="https://i.stack.imgur.com/cN1zc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cN1zc.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/kuVB0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kuVB0.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/AUPZn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AUPZn.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/bdBJh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bdBJh.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/Ljip5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ljip5.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/ZG4YY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZG4YY.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/m2rbW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m2rbW.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/8sAVI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8sAVI.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/rsnfc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsnfc.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/wcAyb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wcAyb.png"
alt="enter image description here"></a> <a href="https://i.stack.imgur.com/Ov1hH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ov1hH.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/2j7aX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2j7aX.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/hyyWL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hyyWL.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/29Q09.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/29Q09.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/vf0EO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vf0EO.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/35Ke5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35Ke5.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/DnLk3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DnLk3.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/nVfdc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nVfdc.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/Nfqsb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nfqsb.png" alt="enter image description here"></a></p>
[ { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... 
\n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". 
A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". 
Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. 
The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. 
You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". 
Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. 
The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. 
You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". 
Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. 
The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. 
You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" }, { "answer_id": 2110198, "score": 6, "text": "<p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression\n$$\\{q(n+1)-q(n)\\}=\\{(n+1)\\log(n+1)-n\\log(n)\\}=\\{n\\log(1+\\frac{1}{n})+\\log(n+1)\\}$$\nThe first summand $n\\log(1+\\frac{1}{n})\\sim 1-\\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$.\nSo if $n+1\\sim e^k$ or equivalently $\\log(n+1)\\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\\sim 54, e^5 \\sim 148, e^5 \\sim 403$ which is the bottom of the \"parabolas\". A little bit before it $\\{q(n+1)-q(n)\\}\\sim\\{\\log(n+1)\\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p>\n\n<p>If for example $n+1\\sim e^{5.5}\\sim 255$ then the difference above is more or less 1/2 and then you will see two \"parabolas\" one above the other. The same argument will work for $n\\sim e^{k +\\frac{p}{q}}$ when you fix the rational $\\frac{p}{q}$.</p>\n" }, { "answer_id": 2146476, "score": 3, "text": "<p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p>\n\n<p>I*ve looked at your function mod 10 and 100. Nice pictures... \n<a href=\"https://i.stack.imgur.com/qNChb.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qNChb.png\" alt=\"mod 10\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/oFIeT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oFIeT.png\" alt=\"mod 100\"></a> </p>\n\n<p>Ok, just for fun: \"fine wood\". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:\n<a href=\"https://i.stack.imgur.com/O2IW1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/O2IW1.png\" alt=\"finewood mod 100\"></a></p>\n" } ]
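<p>As a quick numerical sanity check of the argument in the first answer (a sketch added for illustration, not part of the original post; the helper name <code>q</code> mirrors the answer's notation), one can compare the fractional part of the step $q(n+1)-q(n)$ with that of $\log(n+1)$; they agree up to the small $\frac{1}{2n}$ correction, and the step is nearly an integer when $n+1$ is close to $e^k$:</p>

<pre><code># Check: {q(n+1)-q(n)} is roughly {log(n+1)} (up to ~1/(2n)),
# so the step is nearly an integer when n+1 is near e^k.
import math

def q(n):
    return n * math.log(n)

for n in (54, 148, 403, 244):  # near e^4, e^5, e^6, and e^5.5 (two arcs)
    step = (q(n + 1) - q(n)) % 1.0  # fractional part of the step
    ref = math.log(n + 1) % 1.0     # fractional part of log(n+1)
    print(f"n={n:3d}  step frac={step:.4f}  log frac={ref:.4f}")
</code></pre>

<p>Running this prints step fractions close to 0 or 1 at $n=54, 148, 403$ and close to 1/2 at $n=244$, matching the bottoms of the arcs and the doubled arcs respectively.</p>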
Question: The fractional part of $n\log(n)$ Answer 1: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 2: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 3: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 4: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". 
Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 5: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 6: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 7: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 8: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... 
<a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 9: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 10: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 11: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. 
The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 12: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 13: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 14: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 15: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". 
A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 16: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 17: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 18: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 19: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. 
You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 20: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 21: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. So if $n+1\sim e^k$ or equivalently $\log(n+1)\sim k$ then the difference above is almost zero. You can see this in your diagram at $e^4\sim 54, e^5 \sim 148, e^5 \sim 403$ which is the bottom of the "parabolas". A little bit before it $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$ is a little bit smaller than 1 so that your graph is monotone decreasing, and similarly after it the graph is monotone increasing.</p> <p>If for example $n+1\sim e^{5.5}\sim 255$ then the difference above is more or less 1/2 and then you will see two "parabolas" one above the other. The same argument will work for $n\sim e^{k +\frac{p}{q}}$ when you fix the rational $\frac{p}{q}$.</p> Answer 22: <p><strong><em>Not an answer</em></strong> but some more illustration (by my own couriosity)</p> <p>I*ve looked at your function mod 10 and 100. Nice pictures... <a href="https://i.stack.imgur.com/qNChb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNChb.png" alt="mod 10"></a></p> <p><a href="https://i.stack.imgur.com/oFIeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFIeT.png" alt="mod 100"></a> </p> <p>Ok, just for fun: "fine wood". Also see, how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times: <a href="https://i.stack.imgur.com/O2IW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2IW1.png" alt="finewood mod 100"></a></p> Answer 23: <p>While this is not a complete answer, in order to get some insight to this phenomenon, consider the expression $$\{q(n+1)-q(n)\}=\{(n+1)\log(n+1)-n\log(n)\}=\{n\log(1+\frac{1}{n})+\log(n+1)\}$$ The first summand $n\log(1+\frac{1}{n})\sim 1-\frac{1}{2n}$ which is almost an integer and doesn't affect too much on that expression. It follows that $\{q(n+1)-q(n)\}\sim\{\log(n+1)\}$. 
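To make the analysis above easy to check numerically, here is a minimal Python sketch (an editorial illustration, not part of the original answers; the helper names `frac`, `dist_to_int`, and `q` are my own) that measures how far the step $q(n+1)-q(n)$ sits from an integer near $n+1 \approx e^k$ and near $n+1 \approx e^{5.5}$:

```python
import math

def frac(x):
    # Fractional part of x.
    return x - math.floor(x)

def dist_to_int(x):
    # Distance from x to the nearest integer (0 means "almost an integer").
    f = frac(x)
    return min(f, 1.0 - f)

def q(n):
    # The sequence q(n) = n*log(n) discussed in the answer.
    return n * math.log(n)

# Near n+1 ~ e^k the step q(n+1)-q(n) is almost an integer (the "parabola"
# bottoms), while near n+1 ~ e^{5.5} it sits about 1/2 away from an integer,
# which produces the two interleaved branches.
for t in (4.0, 5.0, 6.0, 5.5):
    n = round(math.e ** t) - 1
    step = q(n + 1) - q(n)
    print(f"n+1 = {n + 1:4d} ~ e^{t}:  dist to nearest integer = {dist_to_int(step):.4f}")
```

Running this prints distances close to $0$ for $t=4,5,6$ and close to $0.5$ for $t=5.5$, matching the claim.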
Ideas for defining a &quot;size&quot; which informally measures subsets of rationals against each other?
2,111,187
12
https://math.stackexchange.com/questions/2111187/ideas-for-defining-a-size-which-informally-measures-subsets-of-rationals-to-ea
<p>EDIT: I posted an answer to this question. Can someone check?</p> <p>Consider sets $T_1,T_2,\dots,T_p$ which are subsets of $\mathbb{Q}$.</p> <p>I want to create a new definition of "size" that distinguishes between subsets of $\mathbb{Q}$ that have "significantly" more elements inside of $\mathbb{Q}$ than other subsets. (Note that I want this size to apply to both finite and countably infinite subsets of $\mathbb{Q}$.)</p> <p>For example, if we compare rational numbers to integers, we know there are significantly more rationals in the rationals than integers in the rationals. Moreover, there are infinitely many rationals between any two consecutive integers, further supporting the previous statement.</p> <p>Unfortunately, formal measures, which assign a weight to each singleton $\left\{x\right\}$ in $T_p$,</p> <p>$$\mu(T_p)=\sum_{x \in T_p}\mu(\left\{x\right\})$$</p> <p>would not be meaningful, since assigning zero or positive weight would give subsets of $\mathbb{Q}$ zero or infinite measure. This does not distinguish which of those subsets has significantly more or fewer elements inside of $\mathbb{Q}$.</p> <p>Cardinality is also a problem, since it counts the number of elements rather than determining which subsets of $\mathbb{Q}$ have more elements in $\mathbb{Q}$. Moreover, every countably infinite set has the same cardinality, $\aleph_0$.</p> <p>However, informal measures can be used. For example, the <a href="https://en.wikipedia.org/wiki/Natural_density" rel="nofollow noreferrer">asymptotic density</a> of a subset of $\mathbb{N}$ is the fraction of the natural numbers in $[0,b]$ that the subset "fills", as $b\to\infty$.</p> <p>How can we create a new definition of "size" that constructs an informal measure of the subsets of $\mathbb{Q}$ and meets the following requirements?</p> <p>-If $T_1=T_2$ then $\mu(T_1)=\mu(T_2)$</p> <p>-If $T_1\subseteq T_2$ then $\mu(T_1) \le \mu(T_2)$</p>
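Since the question leans on asymptotic density, here is a short sketch (my own, assuming the standard definition $d(A)=\lim_{b\to\infty}|A\cap[1,b]|/b$ for $A\subseteq\mathbb{N}$) of the kind of distinction such a "size" makes where cardinality cannot:

```python
import math

def truncated_density(indicator, b):
    # |{n <= b : n in A}| / b, the finite truncation of the natural density.
    return sum(1 for n in range(1, b + 1) if indicator(n)) / b

# Even numbers have density 1/2; perfect squares have density 0 -- two sets
# of the same cardinality that this "size" nevertheless separates.
for b in (10**3, 10**5):
    evens = truncated_density(lambda n: n % 2 == 0, b)
    squares = truncated_density(lambda n: math.isqrt(n) ** 2 == n, b)
    print(f"b = {b}: density(evens) ~ {evens:.4f}, density(squares) ~ {squares:.5f}")
```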
[ { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. 
{ "answer_id": 2287326, "score": 1, "text": "<p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$? It measures the asymptotic behavior of the local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>Set theory asserts that the sets of integers and of rational numbers are countable, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. Moreover, there are infinitely many rationals between any two consecutive integers, further supporting the previous statement.</p>\n</blockquote>\n\n<p>The particular case of the lemma to Matiyasevich's theorem on the existence of a one-to-one correspondence \n$$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$\nrealized by the functions\n$$\begin{cases}\nz(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\\nx(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\\ny(z) = s(z)-x(z),\n\end{cases}\tag1$$\nwhere\n$$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2,\tag2$$\nhelps to reconcile the theory with common sense.</p>\n\n<p>The functions $(1)$-$(2)$ realize the Cantor ordering of the set - by increasing sum $s=x + y$, and, if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alpha</a>). Therefore, the Cantor sequence is\n$$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {6\over1},\dots \right\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of the positive rational numbers.</p>\n\n<p>For example, the natural numbers are mapped to the sequence of Cantor indices\n$$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)\over2},$$\nand the fractions with odd numerator and even denominator to the sequence\n$$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\}.$$</p>\n\n<p>Thus, each infinite subset of the rational numbers can be represented as an increasing sequence of Cantor indices, which allows one to form representative samples of a given size. The reciprocal of the sample expectation of ${x\over y}$, or its limit (when it converges), or some other sample statistic can be used as the required measure of subsets.</p>\n\n<p>At the same time, a more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" } ]
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" }, { "answer_id": 2278065, "score": 2, "text": "<p>There is a \"natural\" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \\to \\infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p>\n\n<p>There are $\\phi(k)$ fractions with denominator $k$ in each unit interval. <a href=\"https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae\" rel=\"nofollow noreferrer\">Wikipedia</a> states that $\\sum\\limits_{i=1}^b \\phi(k) \\approx \\frac 3{\\pi^2}b^2$ so we have about $\\frac 6{\\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br>\n${}{}{}{}$\nIt seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>\n" }, { "answer_id": 2289865, "score": 2, "text": "<p>I <strong>strongly</strong> recommend that you read <a href=\"https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B\" rel=\"nofollow noreferrer\">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href=\"http://www.sciencedirect.com/science/article/pii/S0001870802000129\" rel=\"nofollow noreferrer\">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a \"numerosity\" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\\mathbb{N}$) such that:</p>\n\n<p>(1) If $A, B \\in S$ are such that $A \\subset B$, then $v(A) &lt; v(B)$;\n(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \\sqcup B) = v(A' \\sqcup B')$ and $v(A \\times B) = v(A' \\times B')$ (it preserves disjoint unions and products).</p>\n\n<p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p>\n\n<p>(3) If $A \\simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p>\n\n<p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>\n" }, { "answer_id": 2287326, "score": 1, "text": "<p>How about $\\mu: \\mathcal{P}(\\mathbb{Q})\\to \\mathbb{N}\\cup \\{\\infty\\}$ defined by $\\mu(T)=\\limsup_{a&lt;b\\in \\mathbb{Q}} |T\\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p>\n" }, { "answer_id": 2289023, "score": 1, "text": "<p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality.\nAt the same time, our experience suggests the opposite:</p>\n\n<blockquote>\n <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p>\n</blockquote>\n\n<p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence \n$$ \\left(\\mathbb Z^+\\right)^2\\leftrightarrow \\mathbb Z^+,$$\nusing the functions\n$$\\begin{cases}\nz(x,y) = \\dfrac12\\left((x+y)^2 - x - 3y +2\\right)\\\\\nx(z) = z-\\dfrac12(s(z)-1)(s(z)-2)\\\\\ny(z) = s(z)-x(z),\n\\end{cases}\\tag1$$\nwhere\n$$s(z) = \\left[\\sqrt{2z-{7\\over4}}-{1\\over2}\\right] + 2\\tag2$$</p>\n\n<p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href=\"http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29\" rel=\"nofollow noreferrer\">Wolfram Alfa</a>). Therefore, the Cantor's consequence is\n$$q_i = \\left\\{ {1\\over1}, {1\\over2}, {2\\over1}, {1\\over3}, {2\\over2}, {3\\over1}, {1\\over4}, {2\\over3}, {3\\over2}, {4\\over1}, {1\\over5}, {2\\over4}, {3\\over3}, {4\\over2}, {5\\over1}, {1\\over6}, {2\\over5}, {3\\over4}, {4\\over3}, {5\\over2}, {5\\over2},\\dots \\right\\}$$</p>\n\n<p>In my opinion, this order can be used as the \"natural\" order of positive rational numbers.</p>\n\n<p>For example, natural series displayed in consequence of Cantors indexes\n$$N_n = {1\\over2}\\left((n+1)^2 - n - 3 +2\\right) = {n(n+1)/2},$$\nfractions with odd numerator and even denominator - in consequence\n$$H_n = \\left\\{2, 7, 9, 16, 18, 20,\\dots \\right\\} .$$</p>\n\n<p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p>\n\n<p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p>\n" } ]
Question: Ideas for defining a &quot;size&quot; which informally measures subsets of rationals to each other? Answer 1: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator at most $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{k=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$, so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less. One density that would be interesting is the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>
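A minimal brute-force sketch of this density (the function name and the odd-numerator test set are illustrative, not from the answer):

```python
from math import gcd

def odd_numerator_density(b: int) -> float:
    """Among reduced fractions p/q in [-b, b] with q <= b,
    return the fraction whose numerator p is odd."""
    total = odd = 0
    for q in range(1, b + 1):               # denominators up to b
        for p in range(-b * q, b * q + 1):  # p/q ranges over [-b, b]
            if gcd(abs(p), q) == 1:         # count lowest terms only
                total += 1
                odd += p % 2                # contributes 1 iff p is odd
    return odd / total

for b in (5, 10, 20, 40):
    print(b, odd_numerator_density(b))
```

Whether these ratios converge as $b \to \infty$ is exactly the open point at the end of the answer: it amounts to knowing the partial sums of the totient function over even arguments.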
Answer 2: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$;</p> <p>(2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, it may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like:</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p>
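A quick check (mine, not from the article) of why bijection-invariance cannot coexist with condition (1) on infinite sets: take the even numbers $E = \{2n : n \in \mathbb{N}\}$. Then $$n \mapsto 2n:\ \mathbb{N} \leftrightarrow E \ \text{would force}\ v(E) = v(\mathbb{N}), \quad\text{while}\quad E \subsetneq \mathbb{N} \ \text{and (1) force}\ v(E) &lt; v(\mathbb{N}).$$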
Answer 3: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$? It measures the asymptotic behavior of local density and incorporates your requirement.</p>
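Answer 4 below orders $\left(\mathbb Z^+\right)^2$, and hence the positive rationals $x/y$, by the Cantor pairing $z(x,y)={1\over2}\left((x+y)^2-x-3y+2\right)$. A minimal sketch of that bijection and its inverse (the function names are mine, and the float-based inverse is only meant for moderate $z$):

```python
from math import floor, sqrt

def z_of(x: int, y: int) -> int:
    """Cantor index of the pair (x, y), formula (1) of the answer below."""
    return ((x + y) ** 2 - x - 3 * y + 2) // 2

def s_of(z: int) -> int:
    """Recover the sum s = x + y from the index z, formula (2)."""
    return floor(sqrt(2 * z - 7 / 4) - 1 / 2) + 2

def xy_of(z: int) -> tuple[int, int]:
    """Inverse of z_of: the pair (x, y) sitting at Cantor index z >= 1."""
    s = s_of(z)
    x = z - (s - 1) * (s - 2) // 2
    return x, s - x

# Round-trip check, then the first terms of the sequence q_i:
assert all(z_of(*xy_of(z)) == z for z in range(1, 10_000))
print([f"{x}/{y}" for x, y in map(xy_of, range(1, 11))])
# ['1/1', '1/2', '2/1', '1/3', '2/2', '3/1', '1/4', '2/3', '3/2', '4/1']
```

Filtering this enumeration then reads off the index sequences the answer calls $N_n$ and $H_n$.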
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 9: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 10: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 11: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 12: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 13: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 14: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 15: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 16: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 17: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 18: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 19: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 20: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 21: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 22: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 23: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 24: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Answer 22: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting; typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$;<br> (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, it may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like:</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 23: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$? It measures the asymptotic behavior of local density and incorporates your requirement.</p>
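<p>A toy illustration, assuming Python: for a finite truncation of a set $T$ one can tabulate $|T\cap(a,b)|$ on growing intervals, the quantity whose $\limsup$ the definition takes; the helper and the sample set are ours:</p>
<pre><code>from fractions import Fraction

def count_in(T, a, b):
    # |T intersect (a, b)| for a finite set T of rationals.
    return sum(1 for t in T if a &lt; t &lt; b)

# Truncation of {1/n : n = 1, ..., 99}; the counts increase to |T|
# as the interval grows, so mu of the full set is its cardinality here.
T = {Fraction(1, n) for n in range(1, 100)}
for b in (1, 2, 4):
    print(b, count_in(T, Fraction(-b), Fraction(b)))
</code></pre>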
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 26: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 27: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 28: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 29: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 30: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 31: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 32: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 33: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 34: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 35: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 36: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 37: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 38: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 39: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 40: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 41: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 42: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 43: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 44: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 45: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 46: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 47: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 48: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 49: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 50: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 51: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 52: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 53: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 54: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 55: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 56: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 57: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p> Answer 58: <p>I <strong>strongly</strong> recommend that you read <a href="https://www.cambridge.org/core/journals/review-of-symbolic-logic/article/measuring-the-size-of-infinite-collections-of-natural-numbers-was-cantors-theory-of-infinite-number-inevitable/325464FFF1E318E3E51A80E652FA1C5B" rel="nofollow noreferrer">Mancosu's article on counting principles</a>, as he reviews the interesting literature that has emerged out of this question. In particular, following the <a href="http://www.sciencedirect.com/science/article/pii/S0001870802000129" rel="nofollow noreferrer">work of Benci and Di Nasso</a> (which is the most promising line of attack), he analyzes the question as being equivalent to the existence of a "numerosity" function $v$ from $S$ (the set of sets to be counted) to $N$ (the set which will be used for counting, typically, in the case of countable sets, $\mathbb{N}$) such that:</p> <p>(1) If $A, B \in S$ are such that $A \subset B$, then $v(A) &lt; v(B)$; (2) If $v(A) = v(A')$ and $v(B) = v(B')$, then $v(A \sqcup B) = v(A' \sqcup B')$ and $v(A \times B) = v(A' \times B')$ (it preserves disjoint unions and products).</p> <p>Ideally, we would want it to preserve bijections, but we know that is not compatible with (1) for infinite sets. Nevertheless, there may happen that we are able to impose some other restriction on the bijections to make it work, i.e. something like</p> <p>(3) If $A \simeq B$ (an isomorphism under an appropriate relation), then $v(A) = v(B)$.</p> <p>In the second article mentioned above, the authors show that the existence of (an interesting example of) such a function is equivalent to the existence of a certain type of ultrafilter on the natural numbers (Ramsey or selective ultrafilters), which incidentally shows that this question is independent of $\mathsf{ZFC}$. If you're familiar with ultrafilters, their appearance in this context is not very surprising, since they typically give rise to a measure. There are also interesting connections to non-standard analysis which are explored in the article.</p> Answer 59: <p>How about $\mu: \mathcal{P}(\mathbb{Q})\to \mathbb{N}\cup \{\infty\}$ defined by $\mu(T)=\limsup_{a&lt;b\in \mathbb{Q}} |T\cap (a,b)|$. It measures the asymptotic behavior of local density and incorporates your requirement. </p> Answer 60: <p>The theory of sets asserts that sets of integers and rational numbers are countable sets, that is, they have the same cardinality. At the same time, our experience suggests the opposite:</p> <blockquote> <p>For example, if we compared rational numbers to integers, we know there are significantly more rationals in rationals than integers in rationals. 
Moreover, there are infinite rationals between every integer, further proving the previous statement.></p> </blockquote> <p>To reconcile the theory with common sense helps the particular case of lemma to the theorem of Matiyasevich on the existence of a one-to-one correspondence $$ \left(\mathbb Z^+\right)^2\leftrightarrow \mathbb Z^+,$$ using the functions $$\begin{cases} z(x,y) = \dfrac12\left((x+y)^2 - x - 3y +2\right)\\ x(z) = z-\dfrac12(s(z)-1)(s(z)-2)\\ y(z) = s(z)-x(z), \end{cases}\tag1$$ where $$s(z) = \left[\sqrt{2z-{7\over4}}-{1\over2}\right] + 2\tag2$$</p> <p>The functions $(1-2)$ realize the Cantor ordering of the set - by increasing the sum $s=x + y$, and if the sums are equal, by increasing $x$ (see also <a href="http://www.wolframalpha.com/input/?i=plot+floor((sqrt(8floor(z)+-7)-1)%2F2)%2B2,+floor(z)+-+1%2F2+floor((sqrt(8floor(z)+-+7)-1)%2F2)(floor((sqrt(8floor(z)+-+7)-1)%2F2)%2B1)++,+z%3D1...29" rel="nofollow noreferrer">Wolfram Alfa</a>). Therefore, the Cantor's consequence is $$q_i = \left\{ {1\over1}, {1\over2}, {2\over1}, {1\over3}, {2\over2}, {3\over1}, {1\over4}, {2\over3}, {3\over2}, {4\over1}, {1\over5}, {2\over4}, {3\over3}, {4\over2}, {5\over1}, {1\over6}, {2\over5}, {3\over4}, {4\over3}, {5\over2}, {5\over2},\dots \right\}$$</p> <p>In my opinion, this order can be used as the "natural" order of positive rational numbers.</p> <p>For example, natural series displayed in consequence of Cantors indexes $$N_n = {1\over2}\left((n+1)^2 - n - 3 +2\right) = {n(n+1)/2},$$ fractions with odd numerator and even denominator - in consequence $$H_n = \left\{2, 7, 9, 16, 18, 20,\dots \right\} .$$</p> <p>Thus, each infinite subset of rational numbers can be displayed as the increasing sequence of Cantor indexes, allowing to form representative samples of a given volume. Reciprocal to the selective expectancy of $x\over y$, or its limit (when it converges), or some another selective statistics can be used as the required measure of subsets.</p> <p>At the same time, more informative (and therefore more promising) direction is the construction of empirical probability distributions.</p> Answer 61: <p>There is a "natural" extension of the natural density to the rationals, but I am not sure how useful it is. The set of rationals in $[-b,b]$ with denominator less than $b$ is finite, so we can compute the fraction of these rationals that are in $T$ and ask if there is a limit as $b \to \infty$. We want to consider the rationals when expressed in lowest terms. This clearly meets your request that the density of a superset is at least as great as that of a subset and that sets which are negatives of each other get the same density. </p> <p>There are $\phi(k)$ fractions with denominator $k$ in each unit interval. <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Other_formulae" rel="nofollow noreferrer">Wikipedia</a> states that $\sum\limits_{i=1}^b \phi(k) \approx \frac 3{\pi^2}b^2$ so we have about $\frac 6{\pi^2}b^3$ rationals in $[-b,b]$ with denominator $b$ or less.<br> ${}{}{}{}$ It seems one density that would be interesting would be the density of rationals with odd numerators. This is all the ones with even denominators and half of the ones with odd denominators. 
We need the sum of the totient function over the even numbers to get this, but I haven't found one.</p>
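As a quick sanity check of the two answers above, here is a minimal Python sketch (the function names, the range cutoffs, and the Farey-style count are my own choices, not part of either answer). It implements the pairing formulas $(1)$–$(2)$, reproduces the start of the Cantor sequence, and numerically estimates the proportion of reduced fractions with odd numerator whose closed form the last answer could not locate.

```python
from math import floor, gcd, sqrt

def z(x, y):
    # formula (1): the Cantor index of the pair (x, y) of positive integers
    return ((x + y) ** 2 - x - 3 * y + 2) // 2

def s(n):
    # formula (2): recover the sum x + y from the index n
    return floor(sqrt(2 * n - 7 / 4) - 1 / 2) + 2

def pair(n):
    # the inverse map of formula (1): index n -> (x, y)
    t = s(n)
    x = n - (t - 1) * (t - 2) // 2
    return x, t - x

# the first terms of the Cantor sequence: 1/1, 1/2, 2/1, 1/3, 2/2, 3/1, ...
print(", ".join(f"{x}/{y}" for x, y in map(pair, range(1, 16))))
assert all(z(*pair(n)) == n for n in range(1, 10_000))  # round-trip check

# crude estimate of the density of reduced fractions with odd numerator,
# counting Farey fractions p/q with 1 <= p <= q <= b and gcd(p, q) = 1
b = 1000
total = odd = 0
for q in range(1, b + 1):
    for p in range(1, q + 1):
        if gcd(p, q) == 1:
            total += 1
            odd += p % 2
print(odd / total)  # appears to settle near 2/3 as b grows
```

Restricting the count to $(0,1]$ should be harmless here: shifting a reduced fraction by an integer preserves the parity of the numerator when the denominator is even and alternates it when the denominator is odd, which matches the "all of the even-denominator ones plus half of the odd-denominator ones" description above.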
What is the weak*-topology on a set of probability measures?
2,111,245
12
https://math.stackexchange.com/questions/2111245/what-is-the-weak-topology-on-a-set-of-probability-measures
<p>While trying to answer <a href="https://math.stackexchange.com/questions/2109573/if-e-px-in-spane-p-ix-1-leq-i-leq-n-for-all-integrable-x">this question</a>, I've come across the notion of the weak*-topology on a set of probability measures. I'd like some clarification about what this means.</p> <p>More specifically, let $(\Omega, \mathcal{F})$ be a measurable space. We don't assume that $\Omega$ has any metric or topological structure. What does it mean to equip the set $\mathcal{M}$ of probability measures on this space with the weak* topology?</p> <p>I understand that the weak*-topology is the weakest topology on the dual space $V'$ of a normed vector space $V$ that makes the evaluation functionals defined by $\lambda_f(\phi) = \phi(f)$, $\phi \in V'$ and $f \in V$, continuous. What I don't understand is how $\mathcal{M}$ can be equipped with this topology as it's not a vector space.</p> <p>From what I've read, I think that measures in $\mathcal{M}$ are being identified with linear functionals on a space of measurable functions. For instance, $P \in \mathcal{M}$ gives rise to a linear functional $\phi$ on the normed linear space of bounded $\mathcal{F}$-measurable functions, equipped with the $\sup$-norm, by $\phi(f) := \int f dP$. Is something like this correct? Which underlying vector space of measurable functions should be used?</p> <p>I would appreciate it if someone could please sketch the relevant theory for me and/or refer me to a comprehensive textbook treatment of this topic.</p> <hr> <p><strong>Addendum.</strong> My current understanding of this topic is summarized as part of my attempt to answer my own question <a href="https://math.stackexchange.com/questions/2109573/if-e-px-in-spane-p-ix-1-leq-i-leq-n-for-all-integrable-x">in the link above</a>.</p>
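Before the posted answers, a toy illustration may help fix the picture the question is groping for (this sketch is entirely my own construction; the finite sample space, the Dirichlet draw, and the mixing scheme are all assumptions made for illustration): when $\Omega$ is finite and $\mathcal{F} = 2^\Omega$, bounded measurable functions are vectors, probability measures are points of the simplex, the pairing $\phi_P(f) = \int f\,dP$ is a dot product, so $\mathcal{M}$ visibly sits inside the dual of the function space, and weak* convergence is convergence of these pairings for every $f$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # Omega = {0, 1, 2, 3}, F = all subsets
P = rng.dirichlet(np.ones(n))          # a probability measure on Omega
f = rng.uniform(-1.0, 1.0, size=n)     # a bounded measurable function on Omega

def phi(Q):
    # the evaluation functional induced by the measure Q: g -> int g dQ
    return lambda g: float(np.dot(g, Q))

print(phi(P)(f))                       # phi_P(f) = int f dP

# weak* convergence P_k -> P means phi_{P_k}(f) -> phi_P(f) for EVERY f;
# here each P_k mixes P with the uniform measure, with weight 1/k -> 0
for k in (1, 10, 100, 1000):
    P_k = (1 - 1 / k) * P + (1 / k) * np.full(n, 1 / n)
    print(k, abs(phi(P_k)(f) - phi(P)(f)))  # differences tend to 0
```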
[ { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. 
In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" }, { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). 
By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" }, { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. 
It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" }, { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. 
</p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" }, { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. 
It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" }, { "answer_id": 2134037, "score": 7, "text": "<p>You can consider the set of probabilities over $\\mathcal{F}$ as a subset of the linear space $V(\\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). 
We need to show that this space is the topological dual of another (locally convex) topological vector space.</p>\n\n<p>When $\\mathcal{F}$ is a Boolean algebra of subsets of $\\Omega$ (which in this case does not have to be a topological space) we define $S(\\mathcal{F})$ as the linear space generated by $\\{\\chi_A:\\ A\\in\\mathcal{F}\\}$ (the characteristic functions of the sets in $\\mathcal{F}$). $S(\\mathcal{F})$ is called the space of <em>simple functions</em>.</p>\n\n<p>Now, for every $f\\in S(\\mathcal{F})$, $\\Vert f\\Vert_s:=\\sup\\vert f\\vert&lt;\\infty$ so $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\\mathcal{F}),\\Vert\\cdot\\Vert_s)$ is the space $V(\\mathcal{F})$. In fact, on one hand every $\\lambda\\in V(\\mathcal{F})$ defines the bounded linear functional $f\\mapsto \\int f\\ d\\lambda$ (for $f\\in S(\\mathcal{F})$), on the other, every $x^*\\in S(\\mathcal{F})^*$ defines on $\\mathcal{F}$ the measure $\\lambda_{x^*}(A):=x^*(\\chi_A)$ (for $A\\in\\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p>\n\n<p>Having showed that the set of probabilities over $\\mathcal{F}$ is contained in the topological dual of the normed space $S(\\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p>\n\n<p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href=\"https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308\" rel=\"noreferrer\">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p>\n" }, { "answer_id": 2116838, "score": 4, "text": "<p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\\lvert f(x)\\rvert\\geq \\varepsilon$ is compact, for all $\\varepsilon&gt;0$), equipped with the supremum norm.</p>\n\n<p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href=\"https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem\" rel=\"nofollow noreferrer\">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p>\n\n<p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p>\n\n<p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. 
You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>\n" } ]
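A concrete sketch of the weak-$^*$ topology the second answer describes (an illustration under the additional assumption $X=\mathbb{R}$): the Dirac measures $\delta_{1/n}$ converge weak-$^*$ to $\delta_0$, because for every $f\in C_0(\mathbb{R})$
$$\langle f,\delta_{1/n}\rangle = f(1/n)\ \longrightarrow\ f(0) = \langle f,\delta_0\rangle,$$
while $\Vert\delta_{1/n}-\delta_0\Vert = 2$ in total variation for every $n$; so weak-$^*$ convergence is strictly weaker than norm convergence on this dual space.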
Question: What is the weak*-topology on a set of probability measures? Answer 1: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$ we have $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$, so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on the one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$); on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference.</p> <p>Having shown that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation theorem for Boolean rings, any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach shown by Tomasz can be proved to be much closer to the one presented so far than one might think. It would be interesting to go through this idea.</p>
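A minimal sketch of the boundedness half of this duality (assuming $f=\sum_{i=1}^{n}a_i\chi_{A_i}$ with the $A_i\in\mathcal{F}$ pairwise disjoint and nonempty, so that $\Vert f\Vert_s=\max_i\vert a_i\vert$):
$$\Bigl\vert\int f\ d\lambda\Bigr\vert = \Bigl\vert\sum_{i=1}^{n}a_i\,\lambda(A_i)\Bigr\vert \le \max_i\vert a_i\vert\,\sum_{i=1}^{n}\vert\lambda(A_i)\vert \le \Vert f\Vert_s\,\vert\lambda\vert(\Omega),$$
so each $\lambda\in V(\mathcal{F})$ acts on $S(\mathcal{F})$ with operator norm at most its total variation $\vert\lambda\vert(\Omega)$.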
Answer 2: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which gives you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. functions $f$ for which the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By the <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and so you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>
It would be interesting to go through this idea.</p> Answer 8: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 9: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. 
</p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 10: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 11: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. 
In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 12: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 13: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). 
$S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 14: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 15: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). 
We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 16: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. 
You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 17: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 18: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. 
In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 19: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 20: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). 
By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 21: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 22: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. 
the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 23: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. 
It would be interesting to go through this idea.</p> Answer 24: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 25: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. 
</p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 26: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 27: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. 
In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 28: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 29: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). 
$S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 30: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 31: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). 
We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 32: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. 
You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 33: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 34: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. 
In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 35: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 36: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). 
By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 37: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. It would be interesting to go through this idea.</p> Answer 38: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. 
the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p> Answer 39: <p>You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see <em>Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7</em>). We need to show that this space is the topological dual of another (locally convex) topological vector space.</p> <p>When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of <em>simple functions</em>.</p> <p>Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert&lt;\infty$ so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$), on the other, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See <em>Topological Riesz spaces and measure theory (Fremlin)</em> for a more complete reference. </p> <p>Having showed that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.</p> <p><strong>PS:</strong> It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see <em>Measure theory Vol III (Fremlin)</em>, or <a href="https://terrytao.wordpress.com/2009/01/12/245b-notes-1-the-stone-and-loomis-sikorski-representation-theorems-optional/#more-1308" rel="noreferrer">this survey</a> by Tao). Following this line, the approach showed by Tomasz can be proved to be much closer to the one exposed so far than someone would think. 
It would be interesting to go through this idea.</p> Answer 40: <p>As far as I know, the right context here is that you have a locally compact Hausdorff space $X$, which give you the Banach space $C_0(X)$ of continuous functions vanishing at infinity (i.e. the set of all $x$ such that $\lvert f(x)\rvert\geq \varepsilon$ is compact, for all $\varepsilon&gt;0$), equipped with the supremum norm.</p> <p>Notice that each (signed, or even complex) regular Borel measure of bounded variation gives you a continuous linear functional on $C_0(X)$ (via integration). By <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="nofollow noreferrer">Riesz-Markov theorem</a>, the converse is also true: every continuous functional on $C_0(X)$ is given by integration with a (unique) complex, regular Borel measure of bounded variation.</p> <p>Thus $C_0(X)^*$ can be identified with the space of all complex, regular Borel measures of bounded variation, and thus you can put the weak and weak-$^*$ topology on the latter. In particular, you can see the regular Borel probability measures as a subset of $C_0(X)^*$ and endow them with the appropriate subspace topology.</p> <p>I haven't heard of a definition of weak-$^*$ topology on the space of measures that does not somehow reference a topology on the underlying measure space. You could find an isomorphism with a topological measure space and pull the measure through it, but I don't think the weak-$^*$ topology would be independent of the choice of isomorphism (though I think <em>weak</em> topology might be).</p>
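<p>A standard concrete example, included here only as an illustration of the weak$^*$-topology described in both answers (the choice of $X=[0,1]$ is just for convenience): take the Dirac measures $\mu_n=\delta_{1/n}$. For every $f\in C([0,1])$ one has $$\int f\,d\mu_n=f(1/n)\ \longrightarrow\ f(0)=\int f\,d\delta_0,$$ so $\mu_n\to\delta_0$ in the weak$^*$-topology of $C([0,1])^*$. On the other hand, $\Vert\mu_n-\delta_0\Vert=2$ in total variation for every $n$, so the sequence does <em>not</em> converge in the dual norm; this is precisely the situation in which the weak$^*$-topology on probability measures is the natural one.</p>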
What is the intuitive meaning of the Wronskian?
2,111,649
12
https://math.stackexchange.com/questions/2111649/what-is-the-intuitive-meaning-of-the-wronskian
<p>What is the intuition behind the Wronskian? I do not want an explanation that involves a rigorous proof. I just want to learn how to intuitively think about it.</p> <p>Edit: And since it is tied to the weight function in the theory of differential equations, it would also be great to have an intuitive connection between the two included in the answer!</p>
[ { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. 
See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. 
Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" }, { "answer_id": 2148803, "score": 15, "text": "<p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\\vec{f}\\,'(z)=A(z)\\vec f(z)$ with $A(z)\\in\\mathrm{Mat}_{n\\times n}$, for example by setting $$\\vec f =(\\begin{array}{cccc} f&amp; f' &amp; \\ldots &amp; f^{(n-1)}\\end{array})^T.$$ Let us take $n$ vector solutions $\\vec f_1,\\ldots, \\vec f_n$ and combine them into a matrix $F=(\\begin{array}{ccc}\\vec f_1 &amp;\\ldots &amp;\\vec f_n\n\\end{array})$. Clearly, $F(z)\\in\\mathrm{Mat}_{n\\times n}$ satisfies the same equation\n$$F'(z)=A(z)F(z).\\tag{1}$$\nWhile it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely,\n$$W'(z)=W(z)\\operatorname{Tr}A(z),$$\nwhich implies that $W(z)=\\mathrm{const}\\cdot \\exp \\int \\operatorname{Tr}A(z)\\,dz $. This specific combination is called wronskian. Its merits are :</p>\n\n<ul>\n<li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li>\n<li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li>\n</ul>\n" }, { "answer_id": 2153723, "score": 6, "text": "<p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: \n$$\n\\text{if}\\quad \\frac{d A}{dt}(t)=B(t)A(t)\\quad \\text{then}\\quad \\frac{d}{dt}\\left(\\det A(t)\\right) = \\mathrm{tr}\\, B(t)\\det A(t).$$ </p>\n\n<p>Here $A, B$ are square matrices and $\\mathrm{tr}$ stands for \"trace\" and this formula is often called \"Liouville's formula\".</p>\n\n<p>You apply it as follows. Consider the system of $n$ differential equations \n$$\n\\frac{d\\boldsymbol{x}}{dt} = B(t) \\boldsymbol x(t).$$\n(Boldface denotes elements of $\\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$ one forms the matrix \n$$\nA(t)=\\begin{bmatrix} \\boldsymbol{x}_1,\\ \\ldots\\, , \\boldsymbol{x}_n\\end{bmatrix}\\in\\mathbb{R}^{n\\times n}.$$</p>\n\n<blockquote>\n <p>The Wronskian determinant $\\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\\boldsymbol{x}_1\\ldots \\boldsymbol{x}_n$. 
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p>\n</blockquote>\n\n<p>For a single differential equation of order $n$ \n$$\nx^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\\ldots +b_0(t) x(t) = 0$$ \none introduces the vector \n$$\n\\boldsymbol{x}(t)= \\begin{bmatrix} x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{bmatrix}$$ \nand the matrix (\"companion matrix\") \n$$\nB(t)=\\begin{bmatrix} 0&amp; 1&amp; 0&amp; \\ldots&amp; \\ldots&amp; \\ldots \\\\ \n0 &amp; 0 &amp; 1 &amp; 0 &amp; \\ldots&amp;\\ldots \\\\ \n&amp; &amp; &amp;\\vdots &amp; &amp; \\\\\n0&amp; \\ldots&amp; \\ldots&amp; \\ldots&amp; 0 &amp; 1 \\\\ -b_0(t) &amp; -b_1(t)&amp; \\ldots &amp;\\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\\end{bmatrix}$$\nso that the equation is equivalent to $\\frac{d\\boldsymbol{x}}{dt}=B(t)\\boldsymbol{x}(t)$. Often, one says that $\\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p>\n\n<blockquote>\n <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\\mathrm{tr}\\, (B(t))=-b_{n-1}(t)$. </p>\n</blockquote>\n\n<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href=\"https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)\" rel=\"noreferrer\">Liouville's theorem for Hamiltonian systems</a>. See also <a href=\"https://math.stackexchange.com/q/308255/8157\">here</a>.</p>\n" } ]
Question: What is the intuitive meaning of the Wronskian?

Answer 1: <p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can equivalently be put into the form of a linear system $\vec{f}\,'(z)=A(z)\vec f(z)$ with $A(z)\in\mathrm{Mat}_{n\times n}$, for example by setting $$\vec f =(\begin{array}{cccc} f&amp; f' &amp; \ldots &amp; f^{(n-1)}\end{array})^T.$$ Let us take $n$ vector solutions $\vec f_1,\ldots, \vec f_n$ and combine them into a matrix $F=(\begin{array}{ccc}\vec f_1 &amp;\ldots &amp;\vec f_n \end{array})$. Clearly, $F(z)\in\mathrm{Mat}_{n\times n}$ satisfies the same equation $$F'(z)=A(z)F(z).\tag{1}$$ While it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\operatorname{det} F(z)$ satisfies a scalar linear ODE of first order, namely $$W'(z)=W(z)\operatorname{Tr}A(z),$$ which implies that $W(z)=\mathrm{const}\cdot \exp \int \operatorname{Tr}A(z)\,dz$. This specific combination is called the Wronskian. Its merits are:</p> <ul> <li><p>Its form can be explicitly computed as shown above; this is because the only unknown piece is the value of the constant, which can be calculated even when the solutions themselves are, e.g., complicated special functions (it suffices to keep only their asymptotics).</p></li> <li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of the solutions.</p></li> </ul>
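<p>As a minimal illustration of the formula $W'(z)=W(z)\operatorname{Tr}A(z)$ — a sketch added here, not part of the original answer; the concrete equation and variable names are chosen for the example — one can verify it symbolically for $x''+x'=0$, whose solutions $1$ and $e^{-t}$ are elementary:</p>

<pre><code>import sympy as sp

t = sp.symbols('t')

# Two independent solutions of x'' + x' = 0  (so b1 = 1, b0 = 0)
x1 = sp.Integer(1)
x2 = sp.exp(-t)

# Fundamental matrix F(t) of the equivalent first-order system
F = sp.Matrix([[x1, x2],
               [sp.diff(x1, t), sp.diff(x2, t)]])
W = F.det()                      # the Wronskian: -exp(-t)

# Companion matrix A of f' = A f; its trace is -b1 = -1
A = sp.Matrix([[0, 1],
               [0, -1]])

# W'(t) - W(t) * Tr A(t) should vanish identically
assert sp.simplify(sp.diff(W, t) - W * A.trace()) == 0
print(W)  # -exp(-t), i.e. W(0) * exp(Tr A * t)
</code></pre>

<p>Here $\operatorname{Tr}A=-1$, so the Wronskian decays like $e^{-t}$ no matter which pair of independent solutions is chosen — only the constant in front depends on that choice.</p>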
Answer 2: <p>To me, the key formula in understanding the Wronskian of a linear homogeneous first-order differential system is the following: $$ \text{if}\quad \frac{d A}{dt}(t)=B(t)A(t)\quad \text{then}\quad \frac{d}{dt}\left(\det A(t)\right) = \mathrm{tr}\, B(t)\det A(t).$$ </p> <p>Here $A, B$ are square matrices, $\mathrm{tr}$ stands for "trace", and this formula is often called "Liouville's formula".</p> <p>You apply it as follows. Consider the system of $n$ differential equations $$ \frac{d\boldsymbol{x}}{dt} = B(t) \boldsymbol x(t).$$ (Boldface denotes elements of $\mathbb R^n$, considered as column vectors.) Taking $n$ solutions $\boldsymbol{x}_1,\ldots, \boldsymbol{x}_n$, one forms the matrix $$ A(t)=\begin{bmatrix} \boldsymbol{x}_1,\ \ldots\, , \boldsymbol{x}_n\end{bmatrix}\in\mathbb{R}^{n\times n}.$$</p> <blockquote> <p>The Wronskian determinant $\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\boldsymbol{x}_1,\ldots, \boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p> </blockquote> <p>For a single differential equation of order $n$, $$ x^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\ldots +b_0(t) x(t) = 0,$$ one introduces the vector $$ \boldsymbol{x}(t)= \begin{bmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{bmatrix}$$ and the matrix (the "companion matrix") $$ B(t)=\begin{bmatrix} 0&amp; 1&amp; 0&amp; \ldots&amp; \ldots&amp; \ldots \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; \ldots&amp;\ldots \\ &amp; &amp; &amp;\vdots &amp; &amp; \\ 0&amp; \ldots&amp; \ldots&amp; \ldots&amp; 0 &amp; 1 \\ -b_0(t) &amp; -b_1(t)&amp; \ldots &amp;\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\end{bmatrix}$$ so that the equation is equivalent to $\frac{d\boldsymbol{x}}{dt}=B(t)\boldsymbol{x}(t)$. Often, one says that $\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted as the Cartesian product of position and velocity.)</p> <blockquote> <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\mathrm{tr}\, (B(t))=-b_{n-1}(t)$.</p> </blockquote> <p><strong>Further development</strong>. This interpretation fits well into the framework of <a href="https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)" rel="noreferrer">Liouville's theorem for Hamiltonian systems</a>. See also <a href="https://math.stackexchange.com/q/308255/8157">here</a>.</p>
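<p>Liouville's formula can also be checked numerically — the following is a sketch with an arbitrarily chosen, illustrative $B(t)$ (a $3\times 3$ companion matrix with constant trace $-1/2$), not taken from the original answer. It integrates the matrix equation $A'=B(t)A$ and compares $\det A(t)$ with the prediction $\det A(0)\,\exp\int_0^t \mathrm{tr}\,B(s)\,ds$:</p>

<pre><code>import numpy as np
from scipy.integrate import solve_ivp

n = 3

def B(t):
    # Illustrative choice: companion matrix of x''' + 0.5 x'' + sin(t) x' + x = 0
    return np.array([[ 0.0,        1.0,  0.0],
                     [ 0.0,        0.0,  1.0],
                     [-1.0, -np.sin(t), -0.5]])

def rhs(t, a_flat):
    # Matrix ODE A' = B(t) A, flattened so solve_ivp can handle it
    A = a_flat.reshape(n, n)
    return (B(t) @ A).ravel()

sol = solve_ivp(rhs, (0.0, 5.0), np.eye(n).ravel(),
                rtol=1e-10, atol=1e-12)

detA = np.linalg.det(sol.y[:, -1].reshape(n, n))

# Liouville: det A(t) = det A(0) * exp( integral of tr B ); here tr B = -0.5 for all t
print(detA, np.exp(-0.5 * 5.0))
</code></pre>

<p>The two printed numbers agree to the integrator's tolerance: only the trace of $B$, not its off-diagonal details, enters the evolution of the determinant.</p>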
Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p> </blockquote> <p>For a single differential equation of order $n$ $$ x^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\ldots +b_0(t) x(t) = 0$$ one introduces the vector $$ \boldsymbol{x}(t)= \begin{bmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{bmatrix}$$ and the matrix ("companion matrix") $$ B(t)=\begin{bmatrix} 0&amp; 1&amp; 0&amp; \ldots&amp; \ldots&amp; \ldots \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; \ldots&amp;\ldots \\ &amp; &amp; &amp;\vdots &amp; &amp; \\ 0&amp; \ldots&amp; \ldots&amp; \ldots&amp; 0 &amp; 1 \\ -b_0(t) &amp; -b_1(t)&amp; \ldots &amp;\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\end{bmatrix}$$ so that the equation is equivalent to $\frac{d\boldsymbol{x}}{dt}=B(t)\boldsymbol{x}(t)$. Often, one says that $\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p> <blockquote> <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\mathrm{tr}\, (B(t))=-b_{n-1}(t)$. </p> </blockquote> <p><strong>Further development</strong>. This interpretation fits well in the frame of <a href="https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)" rel="noreferrer">Liouville's theorem for Hamiltonian systems</a>. See also <a href="https://math.stackexchange.com/q/308255/8157">here</a>.</p> Answer 5: <p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\vec{f}\,'(z)=A(z)\vec f(z)$ with $A(z)\in\mathrm{Mat}_{n\times n}$, for example by setting $$\vec f =(\begin{array}{cccc} f&amp; f' &amp; \ldots &amp; f^{(n-1)}\end{array})^T.$$ Let us take $n$ vector solutions $\vec f_1,\ldots, \vec f_n$ and combine them into a matrix $F=(\begin{array}{ccc}\vec f_1 &amp;\ldots &amp;\vec f_n \end{array})$. Clearly, $F(z)\in\mathrm{Mat}_{n\times n}$ satisfies the same equation $$F'(z)=A(z)F(z).\tag{1}$$ While it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely, $$W'(z)=W(z)\operatorname{Tr}A(z),$$ which implies that $W(z)=\mathrm{const}\cdot \exp \int \operatorname{Tr}A(z)\,dz $. This specific combination is called wronskian. Its merits are :</p> <ul> <li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li> <li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li> </ul> Answer 6: <p>To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: $$ \text{if}\quad \frac{d A}{dt}(t)=B(t)A(t)\quad \text{then}\quad \frac{d}{dt}\left(\det A(t)\right) = \mathrm{tr}\, B(t)\det A(t).$$ </p> <p>Here $A, B$ are square matrices and $\mathrm{tr}$ stands for "trace" and this formula is often called "Liouville's formula".</p> <p>You apply it as follows. 
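<p>For the reduction of a scalar equation to a first-order system described above, a small helper (a hypothetical function written for this example) builds the companion matrix from the coefficient functions $b_0,\ldots,b_{n-1}$. For $x''-x=0$ the trace vanishes, so Liouville's formula predicts a constant Wronskian — consistent with $\cosh^2 t-\sinh^2 t=1$ for the solutions $\cosh t$, $\sinh t$:</p>

<pre><code>import numpy as np

def companion(b, t):
    """Companion matrix B(t) for x^(n) + b_{n-1} x^(n-1) + ... + b_0 x = 0.

    `b` is a list of callables [b0, b1, ..., b_{n-1}].
    """
    n = len(b)
    B = np.zeros((n, n))
    B[:-1, 1:] = np.eye(n - 1)           # superdiagonal of ones
    B[-1, :] = [-bi(t) for bi in b]      # last row: -b0, -b1, ..., -b_{n-1}
    return B

# Example: x'' - x = 0, i.e. b0 = -1, b1 = 0
B = companion([lambda t: -1.0, lambda t: 0.0], t=0.0)
print(B)            # [[0. 1.]
                    #  [1. 0.]]
print(np.trace(B))  # 0.0, so det A(t) (the Wronskian) stays constant
</code></pre>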
Consider the system of $n$ differential equations $$ \frac{d\boldsymbol{x}}{dt} = B(t) \boldsymbol x(t).$$ (Boldface denotes elements of $\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\boldsymbol{x}_1\ldots \boldsymbol{x}_n$ one forms the matrix $$ A(t)=\begin{bmatrix} \boldsymbol{x}_1,\ \ldots\, , \boldsymbol{x}_n\end{bmatrix}\in\mathbb{R}^{n\times n}.$$</p> <blockquote> <p>The Wronskian determinant $\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\boldsymbol{x}_1\ldots \boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p> </blockquote> <p>For a single differential equation of order $n$ $$ x^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\ldots +b_0(t) x(t) = 0$$ one introduces the vector $$ \boldsymbol{x}(t)= \begin{bmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{bmatrix}$$ and the matrix ("companion matrix") $$ B(t)=\begin{bmatrix} 0&amp; 1&amp; 0&amp; \ldots&amp; \ldots&amp; \ldots \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; \ldots&amp;\ldots \\ &amp; &amp; &amp;\vdots &amp; &amp; \\ 0&amp; \ldots&amp; \ldots&amp; \ldots&amp; 0 &amp; 1 \\ -b_0(t) &amp; -b_1(t)&amp; \ldots &amp;\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\end{bmatrix}$$ so that the equation is equivalent to $\frac{d\boldsymbol{x}}{dt}=B(t)\boldsymbol{x}(t)$. Often, one says that $\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted the Cartesian product of position and velocity.) </p> <blockquote> <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\mathrm{tr}\, (B(t))=-b_{n-1}(t)$. </p> </blockquote> <p><strong>Further development</strong>. This interpretation fits well in the frame of <a href="https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)" rel="noreferrer">Liouville's theorem for Hamiltonian systems</a>. See also <a href="https://math.stackexchange.com/q/308255/8157">here</a>.</p> Answer 7: <p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into a form of linear system $\vec{f}\,'(z)=A(z)\vec f(z)$ with $A(z)\in\mathrm{Mat}_{n\times n}$, for example by setting $$\vec f =(\begin{array}{cccc} f&amp; f' &amp; \ldots &amp; f^{(n-1)}\end{array})^T.$$ Let us take $n$ vector solutions $\vec f_1,\ldots, \vec f_n$ and combine them into a matrix $F=(\begin{array}{ccc}\vec f_1 &amp;\ldots &amp;\vec f_n \end{array})$. Clearly, $F(z)\in\mathrm{Mat}_{n\times n}$ satisfies the same equation $$F'(z)=A(z)F(z).\tag{1}$$ While it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show that the specific combination $W(z)=\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely, $$W'(z)=W(z)\operatorname{Tr}A(z),$$ which implies that $W(z)=\mathrm{const}\cdot \exp \int \operatorname{Tr}A(z)\,dz $. This specific combination is called wronskian. Its merits are :</p> <ul> <li><p>Its form can be explicitly computed as shown above ; this is because the only unknown piece is the value of the constant which can be calculated even when the solutions themselves are e.g. 
<p><strong>Further development</strong>. This interpretation fits well in the frame of <a href="https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)" rel="noreferrer">Liouville's theorem for Hamiltonian systems</a>. See also <a href="https://math.stackexchange.com/q/308255/8157">here</a>.</p> Answer 7: <p>Every homogeneous linear ordinary differential equation $L^{(n)}f=0$ of order $n$ can be equivalently put into the form of a linear system $\vec{f}\,'(z)=A(z)\vec f(z)$ with $A(z)\in\mathrm{Mat}_{n\times n}$, for example by setting $$\vec f =(\begin{array}{cccc} f&amp; f' &amp; \ldots &amp; f^{(n-1)}\end{array})^T.$$ Let us take $n$ vector solutions $\vec f_1,\ldots, \vec f_n$ and combine them into a matrix $F=(\begin{array}{ccc}\vec f_1 &amp;\ldots &amp;\vec f_n \end{array})$. Clearly, $F(z)\in\mathrm{Mat}_{n\times n}$ satisfies the same equation $$F'(z)=A(z)F(z).\tag{1}$$ While it is difficult and usually impossible to solve the latter system explicitly (this is equivalent to the original problem), one can easily show (for instance from Jacobi's formula $\frac{d}{dz}\det F=\det F\cdot\operatorname{Tr}\left(F^{-1}F'\right)$ combined with (1)) that the specific combination $W(z)=\operatorname{det} F(z)$ satisfies a scalar linear ODE of 1st order, namely $$W'(z)=W(z)\operatorname{Tr}A(z),$$ which implies that $W(z)=\mathrm{const}\cdot \exp \int \operatorname{Tr}A(z)\,dz$. This specific combination is called the Wronskian. Its merits are:</p> <ul> <li><p>Its form can be explicitly computed as shown above; this is because the only unknown piece is the value of the constant, which can be calculated even when the solutions themselves are e.g. complicated special functions (it suffices to keep only their asymptotics).</p></li> <li><p>Vanishing of $W(z)$ is clearly equivalent to linear dependence of solutions.</p></li> </ul>
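<p>The Airy equation $f''(z)=zf(z)$ illustrates the first merit: in system form $\operatorname{Tr}A(z)=0$, so the Wronskian of any two solutions is constant. For the standard pair $\mathrm{Ai}$, $\mathrm{Bi}$, evaluating that constant (from the values at $z=0$, or from the asymptotics as $z\to+\infty$) gives $$W(z)=\mathrm{Ai}(z)\,\mathrm{Bi}'(z)-\mathrm{Ai}'(z)\,\mathrm{Bi}(z)=\frac{1}{\pi},$$ even though $\mathrm{Ai}$ and $\mathrm{Bi}$ themselves are not elementary functions.</p>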
Answer 8: <p>To me, the key formula in understanding the Wronskian of a linear homogeneous first-order differential system is the following: $$ \text{if}\quad \frac{d A}{dt}(t)=B(t)A(t)\quad \text{then}\quad \frac{d}{dt}\left(\det A(t)\right) = \mathrm{tr}\, B(t)\det A(t).$$ </p> <p>Here $A, B$ are square matrices, $\mathrm{tr}$ stands for "trace", and this formula is often called "Liouville's formula".</p> <p>You apply it as follows. Consider the system of $n$ differential equations $$ \frac{d\boldsymbol{x}}{dt} = B(t) \boldsymbol x(t).$$ (Boldface denotes elements of $\mathbb R^n$, considered as column vectors.) Taking $n$ solutions $\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n$, one forms the matrix $$ A(t)=\begin{bmatrix} \boldsymbol{x}_1,\ \ldots\, , \boldsymbol{x}_n\end{bmatrix}\in\mathbb{R}^{n\times n}.$$</p> <blockquote> <p>The Wronskian determinant $\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$.</p> </blockquote>
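<p>For instance, take the constant-coefficient system with $B=\begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 2 \end{bmatrix}$. The solutions $\boldsymbol{x}_1=(e^{t},0)^T$ and $\boldsymbol{x}_2=(0,e^{2t})^T$ give $\det A(t)=e^{3t}$, and indeed $\frac{d}{dt}\det A(t)=3e^{3t}=\mathrm{tr}\,B\cdot\det A(t)$: the volume of the parallelepiped grows at the exponential rate $\mathrm{tr}\,B=3$.</p>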
<p>For a single differential equation of order $n$, $$ x^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\ldots +b_0(t) x(t) = 0,$$ one introduces the vector $$ \boldsymbol{x}(t)= \begin{bmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{bmatrix}$$ and the matrix ("companion matrix") $$ B(t)=\begin{bmatrix} 0&amp; 1&amp; 0&amp; \ldots&amp; \ldots&amp; \ldots \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; \ldots&amp;\ldots \\ &amp; &amp; &amp;\vdots &amp; &amp; \\ 0&amp; \ldots&amp; \ldots&amp; \ldots&amp; 0 &amp; 1 \\ -b_0(t) &amp; -b_1(t)&amp; \ldots &amp;\ldots &amp; -b_{n-2}(t) &amp; -b_{n-1}(t)\end{bmatrix}$$ so that the equation is equivalent to $\frac{d\boldsymbol{x}}{dt}=B(t)\boldsymbol{x}(t)$. Often, one says that $\boldsymbol{x}$ belongs to the <em>phase space</em> of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted as the Cartesian product of position and velocity.)</p> <blockquote> <p>In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\mathrm{tr}\, (B(t))=-b_{n-1}(t)$.</p> </blockquote> <p><strong>Further development</strong>. This interpretation fits well in the frame of <a href="https://en.wikipedia.org/wiki/Liouville&#39;s_theorem_(Hamiltonian)" rel="noreferrer">Liouville's theorem for Hamiltonian systems</a>.
See also <a href="https://math.stackexchange.com/q/308255/8157">here</a>.</p>
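The Wronskian ODE $W'(z)=W(z)\operatorname{Tr}A(z)$ discussed above is easy to sanity-check numerically. Below is a minimal sketch (my addition, not part of the original answers), assuming NumPy and SciPy are available; the matrix $B(t)$ is an arbitrary illustrative choice, the companion matrix of $x'' + t x' + x = 0$.

```python
# Numerical check of Liouville's formula: for A'(t) = B(t) A(t),
# det A(t) should satisfy (det A)'(t) = tr B(t) * det A(t).
import numpy as np
from scipy.integrate import solve_ivp

def B(t):
    # Companion matrix of x'' + t x' + x = 0 (an arbitrary test case).
    return np.array([[0.0, 1.0], [-1.0, -t]])

def rhs(t, y):
    # y holds the 2x2 fundamental matrix A(t), flattened.
    A = y.reshape(2, 2)
    return (B(t) @ A).ravel()

sol = solve_ivp(rhs, (0.0, 2.0), np.eye(2).ravel(),
                dense_output=True, rtol=1e-10, atol=1e-12)

for t in (0.5, 1.0, 2.0):
    W = np.linalg.det(sol.sol(t).reshape(2, 2))
    # tr B(s) = -s, so Liouville predicts W(t) = exp(-t**2 / 2).
    print(t, W, np.exp(-t**2 / 2))
```

Here $\operatorname{tr}B(s)=-s$, so the formula predicts $W(t)=e^{-t^2/2}$, which the integrator reproduces to high accuracy even though the solutions themselves have no elementary closed form.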
Proving property of Fibonacci sequence with group theory
2,117,184
12
https://math.stackexchange.com/questions/2117184/proving-property-of-fibonacci-sequence-with-group-theory
<p>I stumbled upon this problem in a chapter about group theory, and I can't seem to get a grasp on it:</p> <blockquote> <p>Let $p$ be a prime and $F_1 = F_2 = 1, F_{n+2} = F_{n+1} + F_n$ be the Fibonacci sequence. Show that $F_{2p(p^2-1)}$ is divisible by $p$.</p> </blockquote> <p>The book gives the following hint:</p> <blockquote> <p>Look at the group of $2\times 2$ matrices mod $p$ with determinant $\pm 1$</p> </blockquote> <p>I know I have to use Lagrange's theorem at some point, but how do I establish the link between the Fibonacci sequence and the matrices? I mean, I know that there's a special $2\times 2$ matrix whose $n^{th}$ power yields the $n^{th}$ Fibonacci number; I even used it to obtain a general formula for the numbers during my linear algebra course. But how can I use that to prove this?</p>
[ { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. 
So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. 
We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" }, { "answer_id": 2117238, "score": 9, "text": "<p>Call the group $G = \\left\\{A \\in (\\mathbb{Z}/p\\mathbb{Z})^{2\\times 2} \\ \\middle\\vert\\ \\det A = \\pm 1\\right\\}$.</p>\n\n<p>First, we count the order of $G$. We have elements $\\left(\\array{a &amp; b \\\\ c &amp; d}\\right)$, with $a,b,c,d \\in \\mathbb{Z}/p\\mathbb{Z}$ and $ad - bc \\equiv \\pm 1$ ($\\equiv$ means congruent modulo $p$).\nLet us first count the elements with determinant one. $\\mathbb{Z}/p\\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \\equiv 1$ is equivalent to $(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})\\ \\lor\\ (a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))$. Counting the possibilities gives\n$$ \\underbrace{(d \\equiv 0\\ \\land\\ c \\equiv -b^{-1})}_{\\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\\ \\lor\\ \\underbrace{(a \\not\\equiv 0 \\land d \\equiv a^{-1}(1 + bc))}_{\\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$\nfor a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$.\nBy symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p>\n\n<p>Now, let $M = \\left(\\array{1 &amp; 1 \\\\ 1 &amp; 0}\\right)$. You already know that $M^n = \\left(\\array{F_{n+1} &amp; F_{n} \\\\ F_{n} &amp; F_{n-1}}\\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\\left&lt;M\\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to\n$$ \\left(\\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\\\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\\right) \\equiv \\left(\\array{1 &amp; 0 \\\\ 0 &amp; 1}\\right) \\mod{p}.$$\nThe off-diagonal components of this equation are what you wanted to show.</p>\n" }, { "answer_id": 2117199, "score": 5, "text": "<p>The matrix $\\left[ \\begin{array}{cc} 1 &amp; 1 \\\\ 1 &amp; 0 \\end{array} \\right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \\times 2$ matrices $\\bmod p$ with determinant $\\pm 1$ (a certain subgroup of $GL_2(\\mathbb{F}_p)$). If you count the size of this group... </p>\n" } ]
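The order count $|G| = 2p(p^2 - 1)$ in the first answer can be confirmed by brute force for small odd primes. This is a throwaway sketch of mine (standard library only); $p = 2$ is excluded because $-1 \equiv 1$ there, as noted in the answer.

```python
# Brute-force count of 2x2 matrices mod p with determinant +-1,
# compared against the closed form 2p(p^2 - 1) from the answer.
from itertools import product

for p in (3, 5, 7):
    count = sum(1 for a, b, c, d in product(range(p), repeat=4)
                if (a * d - b * c) % p in (1, p - 1))
    print(p, count, 2 * p * (p * p - 1))  # the two counts should agree
```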
Question: Proving property of Fibonacci sequence with group theory Answer 1: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(a \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(a \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$d$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$, and for odd $p$ the two determinant classes are disjoint. So the order of $G$ is $|G| = 2p(p^2 - 1)$. (For $p = 2$ one has $-1 \equiv 1$, so $|G| = p(p^2 - 1) = 6$; the argument below still works because $|G|$ divides $2p(p^2 - 1)$.)</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 2: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p>
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 14: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 15: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 16: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 17: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. 
Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 18: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 19: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 20: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... 
</p> Answer 21: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 22: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 23: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 24: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 25: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 26: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 27: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. 
Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 28: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 29: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 30: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... 
</p> Answer 31: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 32: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 33: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. 
For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 34: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 35: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 36: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 37: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. 
Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 38: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p> Answer 39: <p>Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$.</p> <p>First, we count the order of $G$. We have elements $\left(\array{a &amp; b \\ c &amp; d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(d \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$. Counting the possibilities gives $$ \underbrace{(d \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$a$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$.</p> <p>Now, let $M = \left(\array{1 &amp; 1 \\ 1 &amp; 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\left&lt;M\right&gt;$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} &amp; F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} &amp; F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 &amp; 0 \\ 0 &amp; 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.</p> Answer 40: <p>The matrix $\left[ \begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array} \right]$ has the property that its powers encode Fibonacci numbers. It lives in the group of $2 \times 2$ matrices $\bmod p$ with determinant $\pm 1$ (a certain subgroup of $GL_2(\mathbb{F}_p)$). If you count the size of this group... </p>
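As a quick numerical cross-check of the two ingredients above — the order count $|G| = 2p(p^2-1)$ and the resulting congruence $F_{2p(p^2-1)} \equiv 0 \pmod p$ — here is a minimal Python sketch. The function names are mine, not from either answer, and it restricts to odd primes, since for $p = 2$ the determinant classes $+1$ and $-1$ coincide and the count $2p(p^2-1)$ would double-count.

```python
# Minimal sanity check for small odd primes p; all names are illustrative.
from itertools import product

def order_of_G(p):
    # Count 2x2 matrices over Z/pZ with determinant congruent to +1 or -1.
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p in (1, p - 1))

def fib_mod(n, p):
    # F_n mod p by simple iteration, with F_0 = 0 and F_1 = 1.
    x, y = 0, 1
    for _ in range(n):
        x, y = y, (x + y) % p
    return x

for p in (3, 5, 7, 11):
    N = 2 * p * (p * p - 1)
    assert order_of_G(p) == N   # |G| = 2p(p^2 - 1)
    assert fib_mod(N, p) == 0   # F_{|G|} is divisible by p
    print(f"p = {p}: |G| = {N}, F_N = 0 mod p")
```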
In how many ways can you colorize the vertices of a grid with 4 colors so that the vertices of every unit square are all of different colors?
2,079,165
11
https://math.stackexchange.com/questions/2079165/in-how-many-ways-can-you-colorize-the-vertices-of-a-grid-with-4-colors-so-that-e
<p>In how many ways can you colorize the vertices of an <em>$(n\times n)$-square</em> forming an <em>$(n+1)$-grid</em> with $4$ colors so that the vertices of every unit square are all of different colors?</p> <p>Definitions:</p> <p><em>$(n\times n)$-square</em>: a square whose edge length is $n$.</p> <p><em>$(n+1)$-grid</em>: a square grid containing $(n+1)$ vertices per side, composed of $n^2$ unit squares of dimension $(1\times 1)$.</p> <p>Here are some examples for $n=1,2,3$:</p> <p><a href="https://i.stack.imgur.com/npobF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/npobF.png" alt="figure-1"></a></p> <p><strong>My attempt</strong>:</p> <p>I have posted a similar question <a href="https://math.stackexchange.com/questions/2078594/how-many-ways-are-there-to-color-vertexes-of-a-n-times-n-square-that-in-every">here</a>. I tried to proceed the same way (since the answer there has the same form, $24\times (2^{n}-1)$), but it did not lead anywhere.</p> <p>Any hints?</p>
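The conjectured closed form $24\times(2^n-1)$ can be checked by brute force for tiny $n$ before attempting a proof. Below is a small Python sketch (the names are mine, not from the question); it enumerates all $4^{(n+1)^2}$ vertex colorings, so it is only feasible for $n \le 2$.

```python
# Brute-force check of the conjecture 24*(2**n - 1) for small n.
# Illustrative only: enumerating 4^((n+1)^2) colorings is feasible for n <= 2.
from itertools import product

def count_colorings(n):
    side = n + 1  # the (n+1)-grid has side vertices per row and per column
    total = 0
    for grid in product(range(4), repeat=side * side):
        # every 1x1 square must show all four colors at its corners
        ok = all(len({grid[r * side + c],
                      grid[r * side + c + 1],
                      grid[(r + 1) * side + c],
                      grid[(r + 1) * side + c + 1]}) == 4
                 for r in range(n) for c in range(n))
        total += ok
    return total

for n in (1, 2):
    print(n, count_colorings(n), 24 * (2 ** n - 1))  # the counts should agree
```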
[ { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" }, { "answer_id": 2088242, "score": 7, "text": "<p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. 
Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p>\n\n<p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p>\n\n<p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have\n$$\nm:\\ \\ldots abc\\ldots,\n$$\nwhere $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced,\n$$\n\\begin{aligned}\nm:&amp;\\ \\ldots abc\\ldots\\\\\nm+1:&amp;\\ \\ldots cda\\ldots,\n\\end{aligned}\n$$\nwhere $d\\notin\\{a,b,c\\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p>\n\n<p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$,\n$$\n\\begin{aligned}\nm:&amp;\\ abababa\\ldots &amp; m:&amp;\\ abababa\\ldots\\\\\nm+1:&amp;\\ cdcdcdc\\ldots &amp; m+1:&amp;\\ dcdcdcd\\ldots.\n\\end{aligned}\n$$\nThe number of colors used in column $n$ may be changed by the addition of row $m+1$.</p>\n\n<p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p>\n\n<p>We are now in a position to write a system of recurrences for the number of colorings of an $m\\times n$ grid. Define\n$$\n\\begin{aligned}\nQ_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\\\\nR_{m,n}&amp;=\\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\\\\nS_{m,n}&amp;=\\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\\\\nT_{m,n}&amp;=\\text{num. 
colorings with three colors in row $m$, three colors in column $n$.}\n\\end{aligned}\n$$\nThen\n$$\n\\begin{aligned}\n\\begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\\\ S_{1,1} &amp; T_{1,1}\\end{bmatrix}&amp;=24\\cdot\\begin{bmatrix}1 &amp; 0\\\\ 0 &amp; 0\\end{bmatrix}\\\\\n\\begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\\\ S_{m,n} &amp; T_{m,n}\\end{bmatrix}&amp;=\\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\\\ S_{m-1,n} &amp; T_{m-1,n}\\end{bmatrix}=\\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\\end{bmatrix}\n\\end{aligned}\n$$\nFrom this it is not hard to obtain the result you need. </p>\n\n<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\\cdot2^{m+1-2}=24\\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\\cdot\\left(2^{m-1}+2^{n-1}-1\\right)$ colorings.</p>\n\n<p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p>\n\n<p><strong>Proposition:</strong> If an $m\\times n$ grid, $m,n\\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p>\n\n<ol>\n<li>Only two colors are used in each row of vertices.</li>\n<li>Only two colors are used in each column of vertices.</li>\n</ol>\n\n<p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\\ldots$, then the colors in the deleted row can only be $cdcdcd\\ldots$ or $dcdcdc\\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\\square$</p>\n\n<p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\\ldots$ and let row $1$ be colored $cdcdcd\\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\\ldots$ or $bababa\\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\\ldots$ or $dcdcdc\\ldots$. Hence there are $24\\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\\cdot2^{n-1}$ colorings. 
We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\\cdot(2^m-1)$ colorings.</p>\n" }, { "answer_id": 2089336, "score": 3, "text": "<p>I thought about this for a while and came to a conclusion:</p>\n\n<p>No. of Ways to Colour = $4!*(2^n-1)$!</p>\n\n<p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p>\n\n<p><a href=\"https://i.stack.imgur.com/VOTIp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VOTIp.png\" alt=\"Fig. 1\"></a></p>\n\n<p>Then, to colour the \"layer\" around it, we have 2 ways to colorize \nthe vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p>\n\n<p><a href=\"https://i.stack.imgur.com/8EHDr.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8EHDr.png\" alt=\"Fig. 2\"></a></p>\n\n<p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p>\n\n<p><a href=\"https://i.stack.imgur.com/pizqe.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pizqe.png\" alt=\"Fig. 3\"></a></p>\n\n<p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to \"choose\" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p>\n\n<p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p>\n\n<p>Thus, we must re-state our formula as $2*(n-1)$.</p>\n\n<p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p>\n\n<p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p>\n" } ]
Question: In how many ways can you colorize the vertices of a grid with 4 colors so that the four vertices of every unit square are all of different colors? Answer 1: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$, and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color.</p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely determined or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, then at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows use only two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num.
colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four-colored so that each of the four colors is used in every square, then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four-coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases, since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$.
Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 2: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!\cdot(2^n-1)$</p> <p>In the case of an $n\times n$ grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with a layer of $2n-1$ unit squares (and $2n+1$ new vertices) wrapped around it. Next, imagine that the $(n-1)$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an $n$-sized grid there are $4^{n-1}$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices is given to us and that, in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this were true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this were true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected-columns fact. However, if in the first case we had 2 choices, then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2^{n-1}$.</p> <p>Now, for $n = 1$, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4\cdot3\cdot2\cdot1 = 24$.</p> <p>Thus, we derive that the number of ways to colour an $n\times n$ grid using 4 colours, such that the vertices of every unit square are all of different colours, is $4!\cdot(2^n-1)$.</p>
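<p><em>A quick computational cross-check (not part of either answer):</em> the closed form $24\cdot(2^{m-1}+2^{n-1}-1)$ and the $Q,R,S,T$ recurrence from Answer 1 are easy to verify by brute force for small grids. The following is a minimal Python sketch (the function names are illustrative, and the enumeration is only feasible for very small $m$ and $n$) that enumerates all $4^{(m+1)(n+1)}$ vertex colorings and compares the three counts.</p> <pre><code>from itertools import product

def brute_force(m, n):
    """Count 4-colorings of the (m+1) x (n+1) vertices of an m x n grid of
    squares such that every unit square sees all four colors."""
    rows, cols = m + 1, n + 1
    count = 0
    for coloring in product(range(4), repeat=rows * cols):
        grid = [coloring[r * cols:(r + 1) * cols] for r in range(rows)]
        if all(len({grid[r][c], grid[r][c + 1],
                    grid[r + 1][c], grid[r + 1][c + 1]}) == 4
               for r in range(m) for c in range(n)):
            count += 1
    return count

def recurrence(m, n):
    """Answer 1's recurrence: start at the 1 x 1 grid, add rows, then columns."""
    Q, R, S, T = 24, 0, 0, 0          # values at m = n = 1
    for _ in range(m - 1):            # row step of the matrix recurrence
        Q, R, S, T = Q, Q + 2 * R, S, T
    for _ in range(n - 1):            # column step of the matrix recurrence
        Q, R, S, T = Q, R, Q + 2 * S, T
    return Q + R + S + T

def closed_form(m, n):
    return 24 * (2 ** (m - 1) + 2 ** (n - 1) - 1)

for m, n in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    assert brute_force(m, n) == recurrence(m, n) == closed_form(m, n)
print("all checks passed")
</code></pre> <p>For these small cases the three counts agree ($24$, $48$, $48$ and $72$ respectively), which also matches Answer 2's $4!\cdot(2^n-1)$ when $m=n$.</p>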
To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. 
Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 4: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). 
</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 5: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. 
The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. 
For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 6: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. 
If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 7: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. 
Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. 
must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 8: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 9: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. 
Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. 
colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. 
Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 10: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 11: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. 
To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. 
Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 12: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). 
</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 13: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. 
The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. 
<p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four-colored so that each of the four colors is used in every square, then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four-coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases, since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p>
Answer 14: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of ways to colour $= 4!*(2^n-1)$.</p> <p>In the case of an n by n grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with a layer of $2n-1$ unit squares wrapped around it. Next, imagine that the $(n-1)$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colour the vertices of the unit box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top righthand corner are determined, since two of the colours of each box have already been defined by the $(n-1)$ grid and the bottom righthand corner of each box is fixed by the one below (tracing back to one of the 2 choices for the bottom righthand box). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner we again meet 2 choices, since the top righthand vertex of the $(n-1)$ grid is fixed and one more vertex has already been coloured by the chain up the edge; after this choice the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways (that is, $4$ times the number for the $(n-1)$-sized grid) of colouring the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2-way choices is available, and in fact we only ever get one of them. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this were true, we would have only one choice in how to arrange the colours, and that choice would reproduce the penultimate column of the $(n-1)$ box. </p> <p>If this were true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since the columns would be reflected. However, if in the first case we had 2 choices, then by the time we got to the top righthand corner we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$, i.e. $2$ times the number for the $(n-1)$-sized grid.</p> <p>Now, for $n = 1$, the number of ways of colouring the single square so that its four vertices get four different colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive that the number of ways to colour an n by n grid with 4 colours, such that the vertices of every unit square receive four different colours, is $4!*(2^n-1)$.</p>
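<p>Since the counts are small for $n=1,2$, the formula can be checked directly by exhausting all $4^{(n+1)^2}$ colourings of the vertices. A minimal brute-force sketch (the function name <code>count_colourings</code> is ours, chosen for illustration):</p>
<pre><code>
# Enumerate every 4-colouring of the (n+1) x (n+1) vertex grid and count
# those in which each unit square shows all four colours; compare with
# 4! * (2**n - 1). Feasible only for n = 1, 2 (256 and 262144 cases).
from itertools import product

def count_colourings(n):
    side = n + 1                       # vertices per side
    total = 0
    for colours in product(range(4), repeat=side * side):
        grid = [colours[r * side:(r + 1) * side] for r in range(side)]
        ok = all(
            len({grid[r][c], grid[r][c + 1],
                 grid[r + 1][c], grid[r + 1][c + 1]}) == 4
            for r in range(n) for c in range(n)
        )
        total += ok
    return total

for n in (1, 2):
    print(n, count_colourings(n), 24 * (2**n - 1))   # prints 24, 24 and 72, 72
</code></pre>
<p>Both values agree with the previous answer's closed form $24\cdot(2^{m-1}+2^{n-1}-1)$ specialized to $m=n$.</p>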
Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. 
must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 16: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 17: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. 
Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. 
colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. 
Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 18: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 19: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. 
To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. 
Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 20: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). 
</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 21: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. 
The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. 
For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 22: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. 
If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 23: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. 
Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. 
must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 24: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 25: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. 
Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. 
colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. 
Answer 26: <p>I thought about this for a while and came to a conclusion:</p> <p>Number of ways to colour $= 4!\cdot(2^n-1)$.</p> <p>In the case of an $n\times n$ grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n+1$ vertices wrapped around it. Next, imagine that the $(n-1)$-grid is <em>already coloured</em>. Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colour the vertices of the unit box in the bottom right-hand corner. Then, moving upwards, all the vertices up to the top right-hand corner are determined, since two of the colours have been defined by the $(n-1)$-grid and the bottom right-hand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom right-hand one).</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top right-hand corner, we again meet 2 choices, since you have the top right-hand vertex of the $(n-1)$-grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined.</p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an $n$-sized grid there are $4^{n-1}$ ways of colouring the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices is given to us, and in fact we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom right-hand corner. If this were true, we would only have one choice in how we arrange the colours, and it would be the same as the penultimate column of the $(n-1)$ box.</p> <p>If this were true, another consequence would be that you could still choose one of 2 configurations in the top right-hand corner, since we know the reflected-columns fact. However, if in the first case we had 2 choices, then by the time we got to the top right-hand corner we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must restate our formula as $2^{n-1}$.</p> <p>Now, for $n = 1$, the number of ways of colouring the grid so that the four vertex colours are all different is simply $4! = 4\cdot3\cdot2\cdot1 = 24$.</p> <p>Thus, we derive that the number of ways to colour an $n\times n$ grid using 4 colours, such that the vertices of every unit square all receive different colours, is $4!\cdot(2^n-1)$.</p>
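<p>For very small grids the formula can also be verified by exhaustive search. The sketch below (again Python, added for illustration; feasible only for $n\le2$ since the search space is $4^{(n+1)^2}$) enumerates every colouring of the $(n+1)\times(n+1)$ vertex grid and keeps those in which each unit square shows all four colours, confirming $4!\cdot(2^n-1)$.</p>
<pre><code>from itertools import product

def brute_force(n):
    """Count 4-colourings of the (n+1) x (n+1) vertex grid in which
    every unit square has all four colours (exhaustive search)."""
    side = n + 1
    total = 0
    for colours in product(range(4), repeat=side * side):
        grid = [colours[i * side:(i + 1) * side] for i in range(side)]
        if all(len({grid[i][j], grid[i][j + 1],
                    grid[i + 1][j], grid[i + 1][j + 1]}) == 4
               for i in range(n) for j in range(n)):
            total += 1
    return total

for n in (1, 2):                 # 4**(side*side) grows quickly, so stay small
    assert brute_force(n) == 24 * (2**n - 1)
print("brute force agrees with 4!*(2^n - 1) for n = 1, 2")
</code></pre>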
To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. 
Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 28: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). 
</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 29: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. 
The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. 
For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 30: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. 
If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 31: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. 
Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. 
must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 32: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 33: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. 
Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. 
colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. 
Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 34: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 35: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. 
To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. 
Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 36: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). 
</p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. If this was true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this was true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since we know the reflected columns fact. However, if in the first case we had 2 choices then by the time we had got to the top righthand corner, we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must re-state our formula as $2*(n-1)$.</p> <p>Now, for n = 1, the number of ways of colouring it such that the colours are different with 4 colours is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive the number of ways to colour an n by n grid using 4 colours such that every unit-square vertices are a different colour with 4 colours is $4!*(2^n-1)$.</p> Answer 37: <p>Consider a rectangular grid containing $m$ squares in each column and $n$ squares in each row. Label the rows of squares by the indices $1$, $2$, ..., $m$ and the rows of vertices by the indices $0$, $1$, ..., $m$ and do similarly for columns. Suppose that the vertices of the grid have been colored in such a way that every one of the $mn$ squares contains a vertex of each color. </p> <p>Now suppose that a new row of squares is added to the grid. Can the previous coloring be extended to the last row of vertices (row $m+1$)? If so, in how many ways? The answer to the first question is that it can. This is so because we can color row $m+1$ the same way we colored row $m-1$. To answer the second question, notice that, once the color of any vertex in the row is decided, the colors of all the remaining vertices in the row are determined, assuming a coloring is possible at all. (Successively color neighbors of previously colored vertices until the whole row is colored. Each vertex that gets colored is the fourth member of a square in which three vertices have already been colored. Either the fourth color will be uniquely colored or two of the three colors already used will be the same, making the coloring untenable.) Since the first vertex can be colored in two ways, we conclude that row $m+1$ has at least one coloring and at most two colorings.</p> <p>To determine whether the number of colorings is one or two, observe that if the coloring of row $m$ uses three or four colors, than at some point in the row we will have $$ m:\ \ldots abc\ldots, $$ where $a$, $b$, and $c$ are different colors. 
The colors in the corresponding positions in row $m+1$ are then forced, $$ \begin{aligned} m:&amp;\ \ldots abc\ldots\\ m+1:&amp;\ \ldots cda\ldots, \end{aligned} $$ where $d\notin\{a,b,c\}$. In fact, the colors in the corresponding positions in row $m-1$ are forced in the same way, and rows $m-1$ and $m+1$ are therefore colored identically. So in this case there is one coloring of row $m+1$. Also, the number of colors used in column $n$ will not be changed by the addition of row $m+1$.</p> <p>If row $m$ uses only two colors, then two colorings are possible for row $m+1$, $$ \begin{aligned} m:&amp;\ abababa\ldots &amp; m:&amp;\ abababa\ldots\\ m+1:&amp;\ cdcdcdc\ldots &amp; m+1:&amp;\ dcdcdcd\ldots. \end{aligned} $$ The number of colors used in column $n$ may be changed by the addition of row $m+1$.</p> <p>Observe that if a row uses only two colors, then all preceding and subsequent rows only use two colors; if a row uses three or more colors, then all preceding and subsequent rows use three or more colors. Clearly similar considerations apply to columns.</p> <p>We are now in a position to write a system of recurrences for the number of colorings of an $m\times n$ grid. Define $$ \begin{aligned} Q_{m,n}&amp;=\text{num. colorings with two colors in row $m$, two colors in column $n$,}\\ R_{m,n}&amp;=\text{num. colorings with two colors in row $m$, three colors in column $n$,}\\ S_{m,n}&amp;=\text{num. colorings with three colors in row $m$, two colors in column $n$,}\\ T_{m,n}&amp;=\text{num. colorings with three colors in row $m$, three colors in column $n$.} \end{aligned} $$ Then $$ \begin{aligned} \begin{bmatrix}Q_{1,1} &amp; R_{1,1}\\ S_{1,1} &amp; T_{1,1}\end{bmatrix}&amp;=24\cdot\begin{bmatrix}1 &amp; 0\\ 0 &amp; 0\end{bmatrix}\\ \begin{bmatrix}Q_{m,n} &amp; R_{m,n}\\ S_{m,n} &amp; T_{m,n}\end{bmatrix}&amp;=\begin{bmatrix}Q_{m-1,n} &amp; Q_{m-1,n}+2R_{m-1,n}\\ S_{m-1,n} &amp; T_{m-1,n}\end{bmatrix}=\begin{bmatrix}Q_{m,n-1} &amp; R_{m,n-1}\\ Q_{m,n-1}+2S_{m,n-1} &amp; T_{m,n-1}\end{bmatrix} \end{aligned} $$ From this it is not hard to obtain the result you need. </p> <p>For a conceptual explanation, observe that $T_{m,n}=0$ for all $m$, $n$. So either all rows use only two colors, or all columns use only two colors, or both. If all rows use only two colors then, since there are $4!=24$ colorings for the first square in row $1$, and since the coloring of the first two rows of vertices is determined by the choice of coloring for this square, there are $24\cdot2^{m+1-2}=24\cdot2^{m-1}$ colorings of the grid in this case. Similarly, the number of colorings in the case that all columns use only two colors is $24\cdot2^{n-1}$. Using inclusion-exclusion, the total number of colorings of the grid is $24\cdot\left(2^{m-1}+2^{n-1}-1\right)$ colorings.</p> <p><strong>Added summary:</strong> Let's formalize and flesh out a few details of the conceptual explanation above.</p> <p><strong>Proposition:</strong> If an $m\times n$ grid, $m,n\ge1$, has been four colored so that each of the four colors is used in every square then at least one of the following holds.</p> <ol> <li>Only two colors are used in each row of vertices.</li> <li>Only two colors are used in each column of vertices.</li> </ol> <p><em>Proof:</em> Observe that if a row or column uses only two colors, then the colors in that row or column alternate, $ababab\ldots$. Now perform induction on the number of squares in the grid. The base case is the case of one square, for which the statement clearly holds. 
For the induction step, at least one of $m$, $n$ is greater than $1$. Let's say $m$ is. Then, by the induction hypothesis, the statement holds for the four coloring of the $(m-1)\times n$ grid obtained by deleting the last row of vertices. Suppose that the rows of this smaller grid each use only two colors. If the colors in the last row of this smaller grid are $ababab\ldots$, then the colors in the deleted row can only be $cdcdcd\ldots$ or $dcdcdc\ldots$, where $c$ and $d$ are the two colors different from $a$ and $b$. If the rows of the smaller grid each use three or more colors, then the columns each use only two colors. Furthermore, the deleted row (row $m$) is colored the same as row $m-1$, and hence introduces no new color into any column. $\square$</p> <p>Now if the rows each use only two colors, then let row $0$ be colored $ababab\ldots$ and let row $1$ be colored $cdcdcd\ldots$. There are $4!$ choices for the colors $a$, $b$, $c$, $d$. Then each of rows $2$, $4$, $6$, etc. must be colored either $ababab\ldots$ or $bababa\ldots$, and each of rows $3$, $5$, $7$, etc. must be colored either $cdcdcd\ldots$ or $dcdcdc\ldots$. Hence there are $24\cdot2^{m-1}$ colorings in this case. Similarly, in the case that the columns each use only two colors, there are $24\cdot2^{n-1}$ colorings. We over-count if we simply add the number of colorings in each of the two cases since there are $4!$ colorings in which both rows and columns each use only two colors. Hence the total number of colorings is $24\cdot(2^{m-1}+2^{n-1}-1)$. Specializing to the case where $m=n$, we get $24\cdot(2^m-1)$ colorings.</p> Answer 38: <p>I thought about this for a while and came to a conclusion:</p> <p>No. of Ways to Colour = $4!*(2^n-1)$!</p> <p>In the case of an nxn grid, you can consider it to be an $(n-1)$ by $(n-1)$ grid with $2n-1$ vertices wrapped around it. Next, imagine that the $n-1$ grid is <em>already coloured.</em> Look at this photo:</p> <p><a href="https://i.stack.imgur.com/VOTIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOTIp.png" alt="Fig. 1"></a></p> <p>Then, to colour the "layer" around it, we have 2 ways to colorize the vertices on the unit-box in the bottom righthand corner. Then, moving upwards, all the vertices up to the top-righthand corner are determined, since two of the colours have been defined by the $(n-1)$ grid and the bottom righthand corner of each box is chosen by the one below (ending at one of the 2 choices for the bottom righthand one). </p> <p><a href="https://i.stack.imgur.com/8EHDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EHDr.png" alt="Fig. 2"></a></p> <p>At the top righthand corner, we again meet 2 choices, since you have the top righthand vertex of the $(n-1)$ grid and another is coloured from the ones determined around the edge, and then the rest of the colours are determined. </p> <p><a href="https://i.stack.imgur.com/pizqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pizqe.png" alt="Fig. 3"></a></p> <p>Thus, for an n-sized grid there are $4*(n-1)$ ways of colorizing the vertices satisfying the conditions above. However, if we look more carefully, we can find cases where only one of these 2 choices are given to us and that in fact, we only ever get one of these 2 choices. Consider the case where one of our 2 colours to "choose" from is also the colour of the vertex 2 up, 1 left from the bottom righthand corner. 
If this were true, we would only have one choice in how we arrange the colours, and that would be the same as the penultimate column of the $(n-1)$ box. </p> <p>If this were true, another consequence would be that you could still choose one of 2 configurations in the top righthand corner, since the columns would be reflected. However, if in the first case we had 2 choices, then by the time we got to the top righthand corner we would <em>not</em> have 2 choices there, since the columns would <em>not</em> be reflected.</p> <p>Thus, we must restate our formula as $2*(n-1)$.</p> <p>Now, for $n = 1$, the number of ways of colouring the single square so that its four vertex colours are all different is simply $4! = 4*3*2*1 = 24$.</p> <p>Thus, we derive that the number of ways to colour an $n$ by $n$ grid with 4 colours, such that the vertices of every unit square receive four different colours, is $4!*(2^n-1)$.</p>
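Both closed forms are easy to sanity-check by machine. The sketch below is mine, not part of either answer (it assumes Python; the function names are invented): it brute-forces tiny grids and also iterates the $Q,R,S,T$ recurrence stated above, comparing both against $24\cdot(2^{m-1}+2^{n-1}-1)$.

```python
from itertools import product

def brute_force(m, n):
    """Count 4-colourings of the (m+1) x (n+1) vertex grid in which every
    unit square sees all four colours.  Only feasible for tiny m, n."""
    width = n + 1
    total = 0
    for colouring in product(range(4), repeat=(m + 1) * width):
        c = lambda i, j: colouring[i * width + j]
        if all(len({c(i, j), c(i, j + 1), c(i + 1, j), c(i + 1, j + 1)}) == 4
               for i in range(m) for j in range(n)):
            total += 1
    return total

def by_recurrence(m, n):
    """Iterate the Q, R, S, T system: grow to m rows, then to n columns.
    (Doing all row steps first is my choice; the matrix identity above
    asserts the two growth directions agree.)"""
    Q, R, S, T = 24, 0, 0, 0                  # the 1 x 1 grid: 4! colourings
    for _ in range(m - 1):
        Q, R, S, T = Q, Q + 2 * R, S, T       # add a row of squares
    for _ in range(n - 1):
        Q, R, S, T = Q, R, Q + 2 * S, T       # add a column of squares
    return Q + R + S + T

for m, n in [(1, 1), (1, 2), (2, 2)]:
    closed = 24 * (2 ** (m - 1) + 2 ** (n - 1) - 1)
    print((m, n), brute_force(m, n), by_recurrence(m, n), closed)
```

All three columns of the printout agree (24, 48, 72 for the cases listed), and for $m=n$ the closed form specializes to $24\cdot(2^n-1)$ as claimed.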
Prove that: $\int_{0}^{1}{x+x^2+\cdots+x^{2n}-2nx\over (1+x)\ln{x}}dx=\ln{\left[\left({2\over \pi}\right)^n(2n)!!\right]}$
2,079,609
11
https://math.stackexchange.com/questions/2079609/prove-that-int-01xx2-cdotsx2n-2nx-over-1x-lnxdx-ln-left
<p>Prove that</p> <p>$$\int_{0}^{1}{x+x^2+x^3+\cdots+x^{2n}-2nx\over (1+x)\ln{x}}dx=\ln{\left[\left({2\over \pi}\right)^n(2n)!!\right]}$$</p> <p>$n\ge1$</p> <p>My attempt:</p> <p>$${x+x^2+\cdots+x^{2n}\over 1+x}={1\over 1+x}\cdot{x(1-x^{2n})\over 1-x}=x\cdot{1-x^{2n}\over 1-x^2}$$</p> <p>$${x+x^2+\cdots+x^{2n}-2nx\over 1+x}=x\cdot{1-x^{2n}\over 1-x^2}-{2nx\over 1+x}$$</p> <p>$$\int_{0}^{1}\left(x\cdot{1-x^{2n}\over 1-x^2}-{2nx\over 1+x}\right)\cdot{1\over \ln{x}}dx$$</p> <p>I am stuck and not quite sure what to do next; please help.</p>
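As a warm-up, it helps to see what the claim says for $n=1$: the numerator collapses to $x+x^2-2x=x^2-x$, and since $\left({2\over \pi}\right)^1(2)!!={4\over \pi}$, the statement reduces to the classical integral

$$\int_{0}^{1}{x^2-x\over (1+x)\ln{x}}dx=\ln{4\over \pi},$$

which is exactly the boxed result quoted in the answers below, specialized to $n=1$.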
[ { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
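The identity can also be cross-checked numerically. The following is a minimal sketch of such a check, not part of any answer; it assumes Python's mpmath is available, and it uses $(2n)!!=2^n\,n!$ to evaluate the right-hand side.

```python
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

def lhs(n):
    """Numerically integrate (x + x^2 + ... + x^{2n} - 2nx) / ((1+x) ln x)."""
    def f(x):
        num = sum(x ** k for k in range(1, 2 * n + 1)) - 2 * n * x
        return num / ((1 + x) * mp.log(x))
    return mp.quad(f, [0, 1])

def rhs(n):
    """ln( (2/pi)^n * (2n)!! ), using (2n)!! = 2^n * n!."""
    dbl = mp.mpf(2) ** n * mp.factorial(n)
    return mp.log((mp.mpf(2) / mp.pi) ** n * dbl)

for n in (1, 2, 3):
    print(n, mp.nstr(lhs(n), 15), mp.nstr(rhs(n), 15))
```

For $n=1$ both sides print $\ln(4/\pi)\approx 0.241564$, and the two columns agree to the working precision for each $n$ tested. The integrand has finite limits at both endpoints (it tends to $0$ as $x\to0^+$ and to $n(2n-1)/2$ as $x\to1^-$), so the quadrature is well behaved.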
By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. 
One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. 
One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. 
One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. 
One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. 
One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. By the change of variable $u=-\\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\\int_{0}^{1}{x^s-1\\over (1+x)\\ln{x}}\\:dx=\\log\\left(\\frac{\\sqrt{\\pi}\\:\\Gamma\\left(1+\\frac{s}2\\right)}{\\Gamma(\\frac{1+s}2)} \\right), \\quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\\int_{0}^{1}{(x^2-x)+(x^3-x)+\\cdots+(x^{2n}-x)\\over (1+x)\\ln{x}}\\:dx,\n$$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p>\n" }, { "answer_id": 2079834, "score": 4, "text": "<p>In <a href=\"https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558\">THIS ANSWER</a>, I showed that </p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\int_0^1 \\left(\\frac{x^{2n}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx=\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)}$$</p>\n</blockquote>\n\n<p>We can use this result to evaluate the integral </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\int_0^1 \\left(\\frac{x^{2n+1}-x^2}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx+\\log(4/\\pi)\\\\\\\\\n&amp;=\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\tag1\n\\end{align}$$</p>\n\n<hr>\n\n<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\\to e^{-x}$ and evaluating the resulting <a href=\"https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847\">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p>\n\n<p>$$\\int_0^1 \\frac{x^{2n}-x}{\\log(x)}\\,dx=\\log\\left(\\frac{2n+1}{2}\\right) \\tag 2$$</p>\n\n<hr>\n\n<p>Substituting $(2)$ into $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x^{2n+1}-x}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\log\\left(\\frac{2n+1}{2}\\right)+\\log(4/\\pi)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right)\\\\\\\\\n&amp;= \\log\\left(\\frac{2}{\\pi}(2n+1)\\right)-\\log\\left(\\frac{2}{\\pi}\\frac{(2n)!!}{(2n-1)!!}\\right) \\tag 3\\\\\\\\\n&amp;=\\log\\left(\\frac{(2n+1)!!}{(2n)!!}\\right)\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, we see that </p>\n\n<p>$$\\begin{align}\n\\int_0^1 \\left(\\frac{x+x^2+\\cdots +x^{2n}-2nx}{1+x}\\right)\\,\\frac{1}{\\log(x)}\\,dx&amp;=\\sum_{k=1}^n \\int_0^1 \\frac{x^{2k}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\int_0^1 \\frac{x^{2k+1}-x}{1+x}\\frac{1}{\\log(x)}\\,dx\\\\\\\\\n&amp;=\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;+\\sum_{k=1}^{n-1}\\log\\left(\\frac{2}{\\pi}(2n+1)\\right)\\\\\\\\\n&amp;-\\sum_{k=1}^n\\log\\left(\\frac{2}{\\pi}\\frac{(2k)!!}{(2k-1)!!}\\right)\\\\\\\\\n&amp;=\\log\\left(\\left(\\frac{2}{\\pi}\\right)^n (2n)!!\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2079755, "score": 2, "text": "<p>Hint: We may use the substitution $x\\to e^{-x};$ \n$$I(n) = \\int_0^\\infty \\left( \\frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\\frac{2ne^{-2t}}{1+e^{-t}}\\right)t^{-1}dt\\\\ = \\sum_{k=0}^\\infty \\int_0^\\infty\\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\\right) t^{-1}dt$$ \nand write the previous splitting in terms of $\\Gamma-$function </p>\n" }, { "answer_id": 2079740, "score": 5, "text": "<p><strong>Hint</strong>. One may use the identity</p>\n\n<blockquote>\n <p>$$\n\\int_0^\\infty \\frac{1-e^{-su}}{u(e^u+1)}du=\\frac12\\log(\\pi)+\\log \\Gamma\\left(1+\\frac{s}2\\right)-\\log \\Gamma\\left(\\frac{1+s}2\\right), \\quad s&gt;0. \n$$ </p>\n</blockquote>\n\n<p>proved <a href=\"https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795\">here</a>. 
By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets\n$$\n\int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0,\n$$ then rearranging the numerator of the given integral as\n$$\n\int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx,\n$$ observing that each term $x^k-x$ rewrites as $(x^k-1)-(x-1)$ yields the announced result.</p>\n" } ]
Question: Prove that: $\int_{0}^{1}{x+x^2+\cdots+x^{2n}-2nx\over (1+x)\ln{x}}dx=\ln{\left[\left({2\over \pi}\right)^n(2n)!!\right]}$ Answer 1: <p><strong>Hint</strong>. One may use the identity</p>

<blockquote>
 <p>$$
\int_0^\infty \frac{1-e^{-su}}{u(e^u+1)}du=\frac12\log(\pi)+\log \Gamma\left(1+\frac{s}2\right)-\log \Gamma\left(\frac{1+s}2\right), \quad s&gt;0. 
$$ </p>
</blockquote>

<p>proved <a href="https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795">here</a>. By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets
$$
\int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0,
$$ then rearranging the numerator of the given integral as
$$
\int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx,
$$ observing that each term $x^k-x$ rewrites as $(x^k-1)-(x-1)$ yields the announced result.</p>
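<p>One way to finish from this hint (a short sketch added for completeness; it is not part of the original answer): with $F(s)=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)}\right)$, each term contributes $\int_{0}^{1}{x^k-x\over (1+x)\ln{x}}\:dx=F(k)-F(1)$, and $F(1)=\log\frac{\pi}{2}$. Since $\Gamma\left(\frac{1+k}{2}\right)=\Gamma\left(1+\frac{k-1}{2}\right)$, the gamma factors telescope:
$$
\sum_{k=2}^{2n}\Bigl(\log\Gamma\bigl(1+\tfrac{k}{2}\bigr)-\log\Gamma\bigl(1+\tfrac{k-1}{2}\bigr)\Bigr)=\log\Gamma(1+n)-\log\Gamma\bigl(\tfrac{3}{2}\bigr)=\log n!-\log\frac{\sqrt{\pi}}{2},
$$
so the whole sum equals $(2n-1)\log\sqrt{\pi}+\log n!-\log\frac{\sqrt{\pi}}{2}-(2n-1)\log\frac{\pi}{2}=2n\log 2-n\log\pi+\log n!$, which is $\ln\left[\left(\frac{2}{\pi}\right)^n(2n)!!\right]$ because $(2n)!!=2^n\,n!$.</p>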
Answer 2: <p>In <a href="https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558">THIS ANSWER</a>, I showed that </p>

<blockquote>
 <p>$$\bbox[5px,border:2px solid #C0A000]{\int_0^1 \left(\frac{x^{2n}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)}$$</p>
</blockquote>

<p>We can use this result to evaluate the integral </p>

<p>$$\begin{align}
\int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\right)\,\frac{1}{\log(x)}\,dx\\\\
&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2}{1+x}\right)\,\frac{1}{\log(x)}\,dx+\log(4/\pi)\\\\
&amp;=\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\tag1
\end{align}$$</p>

<hr>

<p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\to e^{-x}$ and evaluating the resulting <a href="https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847">FRULLANI INTEGRAL</a>. Proceeding, we have </p>

<p>$$\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\log\left(\frac{2n+1}{2}\right) \tag 2$$</p>

<hr>

<p>Substituting $(2)$ into $(1)$ reveals</p>

<p>$$\begin{align}
\int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\log\left(\frac{2n+1}{2}\right)+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\\\\
&amp;= \log\left(\frac{2}{\pi}(2n+1)\right)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right) \tag 3\\\\
&amp;=\log\left(\frac{(2n+1)!!}{(2n)!!}\right)
\end{align}$$</p>

<hr>

<p>Finally, we see that </p>

<p>$$\begin{align}
\int_0^1 \left(\frac{x+x^2+\cdots +x^{2n}-2nx}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\sum_{k=1}^n \int_0^1 \frac{x^{2k}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\
&amp;+\sum_{k=1}^{n-1}\int_0^1 \frac{x^{2k+1}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\
&amp;=\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\
&amp;+\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}(2k+1)\right)\\\\
&amp;-\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\
&amp;=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)+\log\left(\left(\frac{2}{\pi}\right)^{n-1}(2n-1)!!\right)\\\\
&amp;=\log\left(\left(\frac{2}{\pi}\right)^n (2n)!!\right)
\end{align}$$</p>

<p>as was to be shown!</p>
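<p>To spell out the Frullani step behind $(2)$ (a short worked detail added here; it is not from the original answer): substituting $x=e^{-t}$, so that $\log(x)=-t$ and $dx=-e^{-t}\,dt$, turns the integral into
$$
\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\int_0^\infty \frac{e^{-2t}-e^{-(2n+1)t}}{t}\,dt=\log\left(\frac{2n+1}{2}\right),
$$
the last equality being Frullani's formula $\int_0^\infty \frac{f(at)-f(bt)}{t}\,dt=\bigl(f(0)-f(\infty)\bigr)\log\frac{b}{a}$ applied with $f(t)=e^{-t}$, $a=2$, $b=2n+1$.</p>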
Answer 3: <p>Hint: We may use the substitution $x\to e^{-x};$ 
$$I(n) = \int_0^\infty \left( \frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\frac{2ne^{-2t}}{1+e^{-t}}\right)t^{-1}dt\\ = \sum_{k=0}^\infty \int_0^\infty\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\right) t^{-1}dt$$ 
and write the previous splitting in terms of the $\Gamma$-function </p>
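<p>To see where the first bracket in $I(n)$ comes from (a worked detail added for clarity; not part of the original hint): with $x=e^{-t}$ one has $\ln x=-t$, $dx=-e^{-t}\,dt$, and $x\colon 0\to 1$ becomes $t\colon\infty\to 0$. The numerator sums as a geometric series, and since $(1-e^{-t})(1+e^{-t})=1-e^{-2t}$,
$$
\frac{e^{-t}}{1+e^{-t}}\sum_{k=1}^{2n}e^{-kt}=\frac{e^{-t}}{1+e^{-t}}\cdot e^{-t}\,\frac{1-e^{-2nt}}{1-e^{-t}}=\frac{e^{-2t}-e^{-2(n+1)t}}{1-e^{-2t}},\qquad
\frac{e^{-t}}{1+e^{-t}}\cdot 2n\,e^{-t}=\frac{2n\,e^{-2t}}{1+e^{-t}},
$$
and the factor $1/\ln x=-1/t$ flips the difference of these two expressions into exactly the displayed bracket. The second line of the hint then follows from the expansions $\frac{1}{1-e^{-2t}}=\sum_{k\ge0}e^{-2kt}$ and $\frac{1}{1+e^{-t}}=\sum_{k\ge0}(-1)^k e^{-kt}$.</p>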
Proceeding, we have </p> <p>$$\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\log\left(\frac{2n+1}{2}\right) \tag 2$$</p> <hr> <p>Substituting $(2)$ into $(1)$ reveals</p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\log\left(\frac{2n+1}{2}\right)+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\\\\ &amp;= \log\left(\frac{2}{\pi}(2n+1)\right)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right) \tag 3\\\\ &amp;=\log\left(\frac{(2n+1)!!}{(2n)!!}\right) \end{align}$$</p> <hr> <p>Finally, we see that </p> <p>$$\begin{align} \int_0^1 \left(\frac{x+x^2+\cdots +x^{2n}-2nx}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\sum_{k=1}^n \int_0^1 \frac{x^{2k}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;+\sum_{k=1}^{n-1}\int_0^1 \frac{x^{2k+1}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;=\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;+\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}(2n+1)\right)\\\\ &amp;-\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;=\log\left(\left(\frac{2}{\pi}\right)^n (2n)!!\right) \end{align}$$</p> <p>as was to be shown!</p> Answer 9: <p>Hint: We may use the substitution $x\to e^{-x};$ $$I(n) = \int_0^\infty \left( \frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\frac{2ne^{-2t}}{1+e^{-t}}\right)t^{-1}dt\\ = \sum_{k=0}^\infty \int_0^\infty\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\right) t^{-1}dt$$ and write the previous splitting in terms of $\Gamma-$function </p> Answer 10: <p><strong>Hint</strong>. One may use the identity</p> <blockquote> <p>$$ \int_0^\infty \frac{1-e^{-su}}{u(e^u+1)}du=\frac12\log(\pi)+\log \Gamma\left(1+\frac{s}2\right)-\log \Gamma\left(\frac{1+s}2\right), \quad s&gt;0. $$ </p> </blockquote> <p>proved <a href="https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795">here</a>. By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets $$ \int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0, $$ then rearranging the numerator of the given integral as $$ \int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx, $$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p> Answer 11: <p>In <a href="https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558">THIS ANSWER</a>, I showed that </p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\int_0^1 \left(\frac{x^{2n}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)}$$</p> </blockquote> <p>We can use this result to evaluate the integral </p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\right)\,\frac{1}{\log(x)}\,dx\\\\ &amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2}{1+x}\right)\,\frac{1}{\log(x)}\,dx+\log(4/\pi)\\\\ &amp;=\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\tag1 \end{align}$$</p> <hr> <p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\to e^{-x}$ and evaluating the resulting <a href="https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p> <p>$$\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\log\left(\frac{2n+1}{2}\right) \tag 2$$</p> <hr> <p>Substituting $(2)$ into $(1)$ reveals</p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\log\left(\frac{2n+1}{2}\right)+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\\\\ &amp;= \log\left(\frac{2}{\pi}(2n+1)\right)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right) \tag 3\\\\ &amp;=\log\left(\frac{(2n+1)!!}{(2n)!!}\right) \end{align}$$</p> <hr> <p>Finally, we see that </p> <p>$$\begin{align} \int_0^1 \left(\frac{x+x^2+\cdots +x^{2n}-2nx}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\sum_{k=1}^n \int_0^1 \frac{x^{2k}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;+\sum_{k=1}^{n-1}\int_0^1 \frac{x^{2k+1}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;=\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;+\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}(2n+1)\right)\\\\ &amp;-\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;=\log\left(\left(\frac{2}{\pi}\right)^n (2n)!!\right) \end{align}$$</p> <p>as was to be shown!</p> Answer 12: <p>Hint: We may use the substitution $x\to e^{-x};$ $$I(n) = \int_0^\infty \left( \frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\frac{2ne^{-2t}}{1+e^{-t}}\right)t^{-1}dt\\ = \sum_{k=0}^\infty \int_0^\infty\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\right) t^{-1}dt$$ and write the previous splitting in terms of $\Gamma-$function </p> Answer 13: <p><strong>Hint</strong>. One may use the identity</p> <blockquote> <p>$$ \int_0^\infty \frac{1-e^{-su}}{u(e^u+1)}du=\frac12\log(\pi)+\log \Gamma\left(1+\frac{s}2\right)-\log \Gamma\left(\frac{1+s}2\right), \quad s&gt;0. $$ </p> </blockquote> <p>proved <a href="https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795">here</a>. By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets $$ \int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0, $$ then rearranging the numerator of the given integral as $$ \int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx, $$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p> Answer 14: <p>In <a href="https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558">THIS ANSWER</a>, I showed that </p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\int_0^1 \left(\frac{x^{2n}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)}$$</p> </blockquote> <p>We can use this result to evaluate the integral </p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\right)\,\frac{1}{\log(x)}\,dx\\\\ &amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2}{1+x}\right)\,\frac{1}{\log(x)}\,dx+\log(4/\pi)\\\\ &amp;=\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\tag1 \end{align}$$</p> <hr> <p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\to e^{-x}$ and evaluating the resulting <a href="https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p> <p>$$\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\log\left(\frac{2n+1}{2}\right) \tag 2$$</p> <hr> <p>Substituting $(2)$ into $(1)$ reveals</p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\log\left(\frac{2n+1}{2}\right)+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\\\\ &amp;= \log\left(\frac{2}{\pi}(2n+1)\right)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right) \tag 3\\\\ &amp;=\log\left(\frac{(2n+1)!!}{(2n)!!}\right) \end{align}$$</p> <hr> <p>Finally, we see that </p> <p>$$\begin{align} \int_0^1 \left(\frac{x+x^2+\cdots +x^{2n}-2nx}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\sum_{k=1}^n \int_0^1 \frac{x^{2k}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;+\sum_{k=1}^{n-1}\int_0^1 \frac{x^{2k+1}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;=\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;+\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}(2n+1)\right)\\\\ &amp;-\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;=\log\left(\left(\frac{2}{\pi}\right)^n (2n)!!\right) \end{align}$$</p> <p>as was to be shown!</p> Answer 15: <p>Hint: We may use the substitution $x\to e^{-x};$ $$I(n) = \int_0^\infty \left( \frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\frac{2ne^{-2t}}{1+e^{-t}}\right)t^{-1}dt\\ = \sum_{k=0}^\infty \int_0^\infty\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\right) t^{-1}dt$$ and write the previous splitting in terms of $\Gamma-$function </p> Answer 16: <p><strong>Hint</strong>. One may use the identity</p> <blockquote> <p>$$ \int_0^\infty \frac{1-e^{-su}}{u(e^u+1)}du=\frac12\log(\pi)+\log \Gamma\left(1+\frac{s}2\right)-\log \Gamma\left(\frac{1+s}2\right), \quad s&gt;0. $$ </p> </blockquote> <p>proved <a href="https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795">here</a>. By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets $$ \int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0, $$ then rearranging the numerator of the given integral as $$ \int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx, $$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p> Answer 17: <p>In <a href="https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558">THIS ANSWER</a>, I showed that </p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\int_0^1 \left(\frac{x^{2n}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)}$$</p> </blockquote> <p>We can use this result to evaluate the integral </p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\right)\,\frac{1}{\log(x)}\,dx\\\\ &amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2}{1+x}\right)\,\frac{1}{\log(x)}\,dx+\log(4/\pi)\\\\ &amp;=\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\tag1 \end{align}$$</p> <hr> <p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\to e^{-x}$ and evaluating the resulting <a href="https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847">FRULLANI INTEGRAL</a>. 
Proceeding, we have </p> <p>$$\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx=\log\left(\frac{2n+1}{2}\right) \tag 2$$</p> <hr> <p>Substituting $(2)$ into $(1)$ reveals</p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\log\left(\frac{2n+1}{2}\right)+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\\\\ &amp;= \log\left(\frac{2}{\pi}(2n+1)\right)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right) \tag 3\\\\ &amp;=\log\left(\frac{(2n+1)!!}{(2n)!!}\right) \end{align}$$</p> <hr> <p>Finally, we see that </p> <p>$$\begin{align} \int_0^1 \left(\frac{x+x^2+\cdots +x^{2n}-2nx}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\sum_{k=1}^n \int_0^1 \frac{x^{2k}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;+\sum_{k=1}^{n-1}\int_0^1 \frac{x^{2k+1}-x}{1+x}\frac{1}{\log(x)}\,dx\\\\ &amp;=\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;+\sum_{k=1}^{n-1}\log\left(\frac{2}{\pi}(2n+1)\right)\\\\ &amp;-\sum_{k=1}^n\log\left(\frac{2}{\pi}\frac{(2k)!!}{(2k-1)!!}\right)\\\\ &amp;=\log\left(\left(\frac{2}{\pi}\right)^n (2n)!!\right) \end{align}$$</p> <p>as was to be shown!</p> Answer 18: <p>Hint: We may use the substitution $x\to e^{-x};$ $$I(n) = \int_0^\infty \left( \frac{e^{-2(n+1)t}-e^{-2t}}{1-e^{-2t}} +\frac{2ne^{-2t}}{1+e^{-t}}\right)t^{-1}dt\\ = \sum_{k=0}^\infty \int_0^\infty\left(e^{-2(n+k+1)t} - e^{-2(k+1)t} +2n (-1)^k e^{-(k+2)t}\right) t^{-1}dt$$ and write the previous splitting in terms of $\Gamma-$function </p> Answer 19: <p><strong>Hint</strong>. One may use the identity</p> <blockquote> <p>$$ \int_0^\infty \frac{1-e^{-su}}{u(e^u+1)}du=\frac12\log(\pi)+\log \Gamma\left(1+\frac{s}2\right)-\log \Gamma\left(\frac{1+s}2\right), \quad s&gt;0. $$ </p> </blockquote> <p>proved <a href="https://math.stackexchange.com/questions/1782731/evaluation-of-sum-n-1-infty-frac-1n-1-etann-without-using-the/1782795#1782795">here</a>. By the change of variable $u=-\ln x$, $x=e^{-u}$, in the preceding integral, one gets $$ \int_{0}^{1}{x^s-1\over (1+x)\ln{x}}\:dx=\log\left(\frac{\sqrt{\pi}\:\Gamma\left(1+\frac{s}2\right)}{\Gamma(\frac{1+s}2)} \right), \quad s&gt;0, $$ then rearranging the numerator of the given integral as $$ \int_{0}^{1}{(x^2-x)+(x^3-x)+\cdots+(x^{2n}-x)\over (1+x)\ln{x}}\:dx, $$ observing that each term $x^k-x$ rewrites $(x^k-1)-(x-1)$ yields the announced result.</p> Answer 20: <p>In <a href="https://math.stackexchange.com/questions/1821080/showing-int-01x2n-x-over-1x-cdotdx-over-lnx-ln-left2-over/1821558#1821558">THIS ANSWER</a>, I showed that </p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\int_0^1 \left(\frac{x^{2n}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx=\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)}$$</p> </blockquote> <p>We can use this result to evaluate the integral </p> <p>$$\begin{align} \int_0^1 \left(\frac{x^{2n+1}-x}{1+x}\right)\,\frac{1}{\log(x)}\,dx&amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2+(x^2-x)}{1+x}\right)\,\frac{1}{\log(x)}\,dx\\\\ &amp;=\int_0^1 \left(\frac{x^{2n+1}-x^2}{1+x}\right)\,\frac{1}{\log(x)}\,dx+\log(4/\pi)\\\\ &amp;=\int_0^1 \frac{x^{2n}-x}{\log(x)}\,dx+\log(4/\pi)-\log\left(\frac{2}{\pi}\frac{(2n)!!}{(2n-1)!!}\right)\tag1 \end{align}$$</p> <hr> <p>The integral on the right-hand side of $(1)$ can be evaluated by enforcing the substitution $x\to e^{-x}$ and evaluating the resulting <a href="https://math.stackexchange.com/questions/1807410/frullani-s-theorem-in-a-complex-context/1808847#1808847">FRULLANI INTEGRAL</a>. 
How to calculate the rank of a matrix?
2,080,554
11
https://math.stackexchange.com/questions/2080554/how-to-calculate-the-rank-of-a-matrix
<blockquote> <p>I need to calculate the rank of the matrix $A$, shown below: $$ A= \begin{bmatrix} 3 &amp; 2 &amp; -1\\ 2 &amp; -3 &amp; -5\\ -1 &amp; -4 &amp; -3 \end{bmatrix} $$</p> </blockquote> <hr> <p>I know that I need to calculate $\det(A)$, and if $\det(A) \neq 0$ then the rank will be equal to $3$, but in this case I'm required to zero out the first column of matrix $A$ using the element $a_{31} = -1$. </p>
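For reference, the determinant test mentioned in the post does come out to zero — a worked check added here (it is not part of the original question), expanding along the first row:

$$\det(A)=3\big[(-3)(-3)-(-5)(-4)\big]-2\big[(2)(-3)-(-5)(-1)\big]+(-1)\big[(2)(-4)-(-3)(-1)\big]=-33+22+11=0,$$

so the rank is at most $2$.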
[ { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. 
The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. 
The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. 
The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" }, { "answer_id": 2080577, "score": 15, "text": "<p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction:\n\\begin{align}\n&amp;\\begin{bmatrix}\n3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\\\\-1&amp;-4&amp;-3\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\3&amp;2&amp;-1\\\\2&amp;-3&amp;-5\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;-10&amp;-10\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\\\[1ex]\n\\rightsquigarrow&amp;\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;-11&amp;-11\n\\end{bmatrix}\\rightsquigarrow\n\\begin{bmatrix}\n1&amp;4&amp;3\\\\0&amp;1&amp;1\\\\0&amp;0&amp;0\n\\end{bmatrix}\n\\end{align}\nThus, the rank is $2$.</p>\n\n<p>Note that it was obvious after the second step. </p>\n" }, { "answer_id": 2080560, "score": 13, "text": "<p>The second column - first column is the last column so the rank is $&lt;3$.</p>\n\n<p>The first two colums are linearly independent so the rank is $2$.</p>\n" }, { "answer_id": 2080559, "score": 11, "text": "<p>The key word is <a href=\"https://en.wikipedia.org/wiki/Row_echelon_form\" rel=\"noreferrer\">row echelon form</a>.</p>\n" }, { "answer_id": 2080561, "score": 2, "text": "<p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>\n" } ]
Question: How to calculate the rank of a matrix? Answer 1: <p>Simply use <code>row reduction</code>: the rank is the number of non-zero rows after you've performed row reduction: \begin{align} &amp;\begin{bmatrix} 3&amp;2&amp;-1\\2&amp;-3&amp;-5\\-1&amp;-4&amp;-3 \end{bmatrix}\rightsquigarrow \begin{bmatrix} 1&amp;4&amp;3\\3&amp;2&amp;-1\\2&amp;-3&amp;-5 \end{bmatrix}\rightsquigarrow \begin{bmatrix} 1&amp;4&amp;3\\0&amp;-10&amp;-10\\0&amp;-11&amp;-11 \end{bmatrix}\\[1ex] \rightsquigarrow&amp;\begin{bmatrix} 1&amp;4&amp;3\\0&amp;1&amp;1\\0&amp;-11&amp;-11 \end{bmatrix}\rightsquigarrow \begin{bmatrix} 1&amp;4&amp;3\\0&amp;1&amp;1\\0&amp;0&amp;0 \end{bmatrix} \end{align} Thus, the rank is $2$.</p> <p>Note that it was obvious after the second step. </p> Answer 2: <p>The second column - first column is the last column so the rank is $&lt;3$.</p> <p>The first two columns are linearly independent so the rank is $2$.</p> Answer 3: <p>The key word is <a href="https://en.wikipedia.org/wiki/Row_echelon_form" rel="noreferrer">row echelon form</a>.</p> Answer 4: <p>You can apply linear transformations to $A$ and find an upper triangular matrix. The number of non-zero lines of that matrix will give you the rank of $A$</p>
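A minimal Python/NumPy sketch of the row-reduction procedure from Answer 1: the matrix and the expected rank of $2$ are taken from the answers above, while the elimination routine and the `tol` threshold are illustrative assumptions made for floating-point arithmetic, not code from the thread.

```python
import numpy as np

def rank_by_row_reduction(M, tol=1e-12):
    """Count the non-zero rows left after Gaussian elimination (row echelon form)."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    r = 0  # index of the next pivot row
    for c in range(cols):
        # find a usable pivot in column c, at or below row r
        pivots = [i for i in range(r, rows) if abs(A[i, c]) > tol]
        if not pivots:
            continue
        i = pivots[0]
        A[[r, i]] = A[[i, r]]        # swap the pivot row into place
        A[r] /= A[r, c]              # normalize the pivot to 1
        for j in range(r + 1, rows):
            A[j] -= A[j, c] * A[r]   # eliminate entries below the pivot
        r += 1
    return r  # rank = number of non-zero rows in echelon form

A = [[3, 2, -1], [2, -3, -5], [-1, -4, -3]]
print(rank_by_row_reduction(A))             # 2, as derived above
print(np.linalg.matrix_rank(np.array(A)))   # 2, NumPy's own cross-check
```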
Midpoint polygons?
2080771
11
https://math.stackexchange.com/questions/2080771/midpoint-polygons
<p>A midpoint polygon is formed by taking the midpoints of each side of a polygon, and making a new polygon out of those points. The end result is the midpoint polygon inscribed in the polygon you started off with. </p> <p>Given an arbitrary polygon, is it always the midpoint polygon of some other polygon? And if so, is this other polygon unique? Is there a simple construction to find it? And also, are there any special cases of the original question which are easier to answer?</p>
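Before the answers, a minimal Python/NumPy sketch may help fix ideas. The `midpoint_polygon` map is exactly the construction described above; `predual_odd` implements the alternating-sum starting point and the recurrence $a_{k+1}=-a_k+2x_{k+1}$ that the first answer below derives for odd $n$; and `midpoint_matrix` builds the circulant matrix used in the second answer. The function names and the vertex representation (an $n \times 2$ array) are illustrative assumptions, not notation from the thread.

```python
import numpy as np

def midpoint_polygon(P):
    """Midpoints of the sides of polygon P (an n x 2 array of vertices)."""
    P = np.asarray(P, dtype=float)
    return (P + np.roll(P, -1, axis=0)) / 2.0

def predual_odd(Q):
    """Invert the midpoint map for odd n: start from the alternating sum
    q_1 - q_2 + q_3 - ... + q_n, then apply p_{k+1} = 2 q_k - p_k."""
    Q = np.asarray(Q, dtype=float)
    n = len(Q)
    assert n % 2 == 1, "the midpoint map is singular for even n"
    signs = (-1.0) ** np.arange(n)        # +1, -1, +1, ...
    p = (signs[:, None] * Q).sum(axis=0)  # the unique starting vertex
    P = [p]
    for k in range(n - 1):
        p = 2 * Q[k] - p
        P.append(p)
    return np.array(P)

def midpoint_matrix(n):
    """Circulant matrix of the midpoint map; first row 1/2, 1/2, 0, ..., 0."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = A[i, (i + 1) % n] = 0.5
    return A

Q = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])        # a triangle (n = 3, odd)
print(np.allclose(midpoint_polygon(predual_odd(Q)), Q))    # True
for n in (4, 5, 6, 7):
    print(n, np.linalg.matrix_rank(midpoint_matrix(n)))    # n-1 if n even, n if odd
```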
[ { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p>\n\n<p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p>\n" }, { "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1 \\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1 \\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1 \\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1} = \\vec{Y_1Y_2},\\vec{e_2}=\\vec{Y_2Y_3},\\dots \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. 
In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }, { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p>\n\n<p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. 
Therefore, $A$ is singular iff $n$ is even.</p>\n" }, { "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1 \\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1 \\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1 \\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1} = \\vec{Y_1Y_2},\\vec{e_2}=\\vec{Y_2Y_3},\\dots \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }, { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). 
You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p>\n\n<p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p>\n" }, { "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1 \\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1 \\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1 \\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1} = \\vec{Y_1Y_2},\\vec{e_2}=\\vec{Y_2Y_3},\\dots \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. 
This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }, { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p>\n\n<p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p>\n" }, { "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1 \\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1 \\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1 \\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1} = \\vec{Y_1Y_2},\\vec{e_2}=\\vec{Y_2Y_3},\\dots \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. 
The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }, { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. 
{ "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3\\dots X_n$ is a polygon and $Y_1Y_2Y_3\\dots Y_n$ is its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1\\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1\\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1\\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots, \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1}=\\vec{Y_1Y_2}, \\vec{e_2}=\\vec{Y_2Y_3}, \\dots, \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. The following condition is necessary and sufficient for the polygon to have a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two closed polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }
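The edge-sum criterion for even n is straightforward to test mechanically. Here is a small Python sketch (the helper name is mine, not from the answer) that applies it to a parallelogram and to a generic quadrilateral:

```python
# A quick sketch of the even-n criterion above (function name is mine):
# a polygon has a predual iff its odd-indexed and even-indexed edge
# vectors each sum to zero (for odd n this is automatic).
def has_predual(vertices):
    n = len(vertices)
    edges = [(vertices[(i + 1) % n][0] - vertices[i][0],
              vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]
    if n % 2 == 1:
        return True  # odd n: a (unique) predual always exists
    odd = [sum(e[k] for e in edges[0::2]) for k in (0, 1)]
    even = [sum(e[k] for e in edges[1::2]) for k in (0, 1)]
    return odd == [0, 0] and even == [0, 0]

print(has_predual([(0, 0), (2, 0), (3, 1), (1, 1)]))  # parallelogram: True
print(has_predual([(0, 0), (2, 0), (3, 2), (0, 1)]))  # generic quad: False
```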
The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" }, { "answer_id": 2080790, "score": 17, "text": "<p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p>\n\n<p>EDIT:</p>\n\n<p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\\ldots(a_n,b_n)$ having midpoints as the original polygon:</p>\n\n<p>$x_1=\\frac{a+a_1}{2}$, $y_1=\\frac{b+b_1}{2}$ so</p>\n\n<p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p>\n\n<p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p>\n\n<p>$\\ldots$</p>\n\n<p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p>\n\n<p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p>\n\n<p>So for $n$ odd you get an unique solution:</p>\n\n<p>$a=x_1-x_2+x_3-\\ldots+(-1)^{n+1}x_n$</p>\n\n<p>$b=y_1-y_2+y_3-\\ldots+(-1)^{n+1}y_n$</p>\n\n<p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href=\"https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem\" rel=\"nofollow noreferrer\">Rouche's theorem</a></p>\n\n<p>So when $n$ is even, the system has solution iff:</p>\n\n<p>$x_1-x_2+x_3-\\ldots-x_n=0$</p>\n\n<p>$y_1-y_2+y_3-\\ldots-y_n=0$</p>\n\n<p>In which case it has an infinity of solutions, starting with any point of the plane.</p>\n\n<p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p>\n" }, { "answer_id": 2081060, "score": 7, "text": "<p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href=\"https://en.wikipedia.org/wiki/Circulant_matrix\" rel=\"nofollow noreferrer\">circulant matrix</a> whose first row is $1/2,1/2,0,\\dots,0$.</p>\n\n<p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\\gcd( x^{n-1}+1, x^n - 1)$. 
When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p>\n\n<p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p>\n" }, { "answer_id": 2081342, "score": 1, "text": "<p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\\dots Y_n$ the <em>dual</em> of $X_1\\dots X_n$, and $X_1 \\dots X_n$ a <em>predual</em> of $Y_1\\dots Y_n$. The following results completely characterize when a polygon $Y_1 \\dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p>\n\n<p>If $n$ is odd, then any polygon $Y_1 \\dots Y_n$ has a unique predual.</p>\n\n<p>If $n$ is even, things are much more complicated. Let $\\vec{e_1}, \\vec{e_2}, \\dots \\vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\\vec{e_1} = \\vec{Y_1Y_2},\\vec{e_2}=\\vec{Y_2Y_3},\\dots \\vec{e_n}=\\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\\vec{e_1}+\\vec{e_2}+\\cdots+\\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\\vec{e_1}+\\vec{e_3}+\\cdots+\\vec{e_{n-1}}=0$ and $\\vec{e_2}+\\vec{e_4}+\\cdots+\\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p>\n\n<p>In the case $n=4$, the condition above reduces to the requirement that $\\vec{e_1}=-\\vec{e_3}$ and $\\vec{e_2}=-\\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>\n\n<p>Regarding uniqueness, form any two polygons $X_1X_3\\dots X_{n-1}$ and $X_2X_4\\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\\dots X_n'$ be any translation of $X_2X_4\\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\\dots X_{n-1}X_n$ and $X_1X_2'X_3\\dots X_{n-1}X_n'$ have congruent duals.</p>\n\n<p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons \"float\" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>\n" } ]
Question: Midpoint polygons? Answer 1: <p>Note that the midpoints of the sides of a quadrilateral form a parallelogram. So if a quadrilateral is not a parallelogram, it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon have coordinates $(x_1,y_1),\ldots,(x_n,y_n)$. Suppose that we start from $(a,b)$ and incrementally construct the broken line $(a,b),(a_1,b_1),\ldots,(a_n,b_n)$ whose midpoints are the vertices of the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$, so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$.</p> <p>So for $n$ odd you get a unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (they have either an infinity of solutions or none at all). You may check which of the two cases occurs with the <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouché–Capelli theorem</a>.</p> <p>So when $n$ is even, the system has a solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In that case it has an infinity of solutions: any starting point $(a,b)$ of the plane works.</p> <p>Of course, depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersecting, convex), you might have to impose additional restrictions.</p>
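<p>To make the recursion concrete, here is a minimal Python sketch (illustrative and not from the answer; it assumes exact integer coordinates so the closure test is an exact comparison). It computes the alternating-sum starting vertex for odd $n$, runs the reflection recursion, and checks that the broken line closes up.</p> <pre><code>
# Sketch: invert the midpoint construction for a polygon with an odd
# number of vertices. Names are illustrative.

def predual(vertices):
    """Return a polygon whose midpoint polygon is `vertices` (n odd)."""
    n = len(vertices)
    if n % 2 == 0:
        raise ValueError("for even n a predual need not exist, nor be unique")
    # Alternating sums give the unique starting vertex (a, b).
    a = sum((-1) ** i * x for i, (x, _) in enumerate(vertices))
    b = sum((-1) ** i * y for i, (_, y) in enumerate(vertices))
    points = [(a, b)]
    for (x, y) in vertices:
        a, b = -a + 2 * x, -b + 2 * y   # reflect through each midpoint
        points.append((a, b))
    assert points[-1] == points[0]      # the broken line closes up
    return points[:-1]

poly = [(0, 0), (4, 0), (5, 3), (2, 5), (-1, 2)]          # a pentagon
pre = predual(poly)
mids = [((px + qx) / 2, (py + qy) / 2)
        for (px, py), (qx, qy) in zip(pre, pre[1:] + pre[:1])]
print(mids)   # the midpoints reproduce `poly` (as floats)
</code></pre>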
Answer 2: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ holds the new vertices, $P$ holds the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$, so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is in fact a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$, and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p>
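<p>A quick numerical check of the rank claim, as a sketch (it assumes NumPy is available; the function name is illustrative): build the circulant matrix for several $n$ and compare its rank to $n$.</p> <pre><code>
# Sketch: the midpoint matrix drops rank exactly when n is even.
import numpy as np

def midpoint_matrix(n):
    """Circulant matrix whose first row is 1/2, 1/2, 0, ..., 0."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = A[i, (i + 1) % n] = 0.5
    return A

for n in range(3, 9):
    rank = np.linalg.matrix_rank(midpoint_matrix(n))
    print(n, rank)   # rank n for odd n, rank n - 1 for even n
</code></pre>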
Answer 3: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two closed polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p>
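<p>A short sketch of this existence test (illustrative Python, not from the answer): sum the odd-numbered and the even-numbered edge vectors separately and check that both sums vanish.</p> <pre><code>
# Sketch: does an n-gon (n even) satisfy the predual condition?
def has_predual(vertices):
    n = len(vertices)
    if n % 2 == 1:
        return True                       # odd n: a predual always exists
    edges = [(vertices[(i + 1) % n][0] - vertices[i][0],
              vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]
    odd_sum = tuple(sum(e[k] for e in edges[0::2]) for k in (0, 1))
    even_sum = tuple(sum(e[k] for e in edges[1::2]) for k in (0, 1))
    return odd_sum == (0, 0) and even_sum == (0, 0)

print(has_predual([(0, 0), (2, 0), (2, 1), (0, 1)]))   # parallelogram: True
print(has_predual([(0, 0), (2, 0), (3, 1), (0, 1)]))   # trapezoid: False
</code></pre>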
<p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>
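<p>The "floating" claim is easy to verify directly: translating the even-numbered vertices by a vector $\vec t$ moves every edge midpoint by $\vec t/2$, since each edge joins one odd- and one even-numbered vertex; the dual is therefore rigidly translated, hence congruent. A minimal sketch (illustrative names, arbitrary hexagon):</p> <pre><code>
# Sketch: floating one sub-polygon translates the whole dual by t/2.
def dual(P):
    n = len(P)
    return [((P[i][0] + P[(i + 1) % n][0]) / 2,
             (P[i][1] + P[(i + 1) % n][1]) / 2) for i in range(n)]

hexagon = [(0, 0), (3, 0), (4, 2), (2, 4), (0, 3), (-1, 1)]
tx, ty = 5, -2                                   # arbitrary translation
floated = [(x + tx, y + ty) if i % 2 else (x, y)
           for i, (x, y) in enumerate(hexagon)]  # shift X2, X4, X6 only
d1, d2 = dual(hexagon), dual(floated)
print(all((x2 - x1, y2 - y1) == (tx / 2, ty / 2)
          for (x1, y1), (x2, y2) in zip(d1, d2)))   # True: congruent duals
</code></pre>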
So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 11: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 12: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. 
In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 13: <p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 14: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 15: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. 
The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 16: <p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). 
You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 17: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 18: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 19: <p>Note that the midpoints of a quadrilateral form a parallelogram. 
So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 20: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 21: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. 
In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 22: <p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 23: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 24: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. 
The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 25: <p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). 
You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 26: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 27: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 28: <p>Note that the midpoints of a quadrilateral form a parallelogram. 
So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 29: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 30: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots \vec{e_n}=\vec{Y_nY_1}$. Then for any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient to ensure that the polygon has a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. 
In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> separately add together to form two polygons.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness, form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices. Let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex. Then let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p> Answer 31: <p>Note that the midpoints of a quadrilateral form a parallelogram. So if a quadrilateral is not parallelogram it cannot be the midpoint polygon of another quadrilateral.</p> <p>EDIT:</p> <p>Let the vertices of the polygon be of coordinates $(x_1,y_1)\ldots(x_n,y_n)$ Suppose that we start from $(a,b)$ and we incrementally construct the broken line $(a,b),(a_1,b_1)\ldots(a_n,b_n)$ having midpoints as the original polygon:</p> <p>$x_1=\frac{a+a_1}{2}$, $y_1=\frac{b+b_1}{2}$ so</p> <p>$a_1=-a+2x_1$, $b_1=-b+2y_1$</p> <p>$a_2=-a_1+2x_2$, $b_2=-b_1+2y_2$</p> <p>$\ldots$</p> <p>$a_n=-a_{n-1}+2x_n$, $b_n=-b_{n-1}+2y_n$</p> <p>To have a closed polygon one needs $a_n=a$ and $b_n=b$</p> <p>So for $n$ odd you get an unique solution:</p> <p>$a=x_1-x_2+x_3-\ldots+(-1)^{n+1}x_n$</p> <p>$b=y_1-y_2+y_3-\ldots+(-1)^{n+1}y_n$</p> <p>For $n$ even, the two linear systems are not of full rank (have either an infinity of zero solutions). You may check when each of the case happens with <a href="https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli_theorem" rel="nofollow noreferrer">Rouche's theorem</a></p> <p>So when $n$ is even, the system has solution iff:</p> <p>$x_1-x_2+x_3-\ldots-x_n=0$</p> <p>$y_1-y_2+y_3-\ldots-y_n=0$</p> <p>In which case it has an infinity of solutions, starting with any point of the plane.</p> <p>Of course depending on your unspecified requirements on the polygon (non-degenerate, non-self-intersect, convex), you might have to impose additional restrictions.</p> Answer 32: <p>The midpoint transformation can be written in matrix form $Q=AP$, where $Q$ are the new vertices, $P$ are the old vertices, and the matrix $A$ is a <a href="https://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow noreferrer">circulant matrix</a> whose first row is $1/2,1/2,0,\dots,0$.</p> <p>The relevant property is that the rank of $A$ is $n-d$, where $d$ is the degree of $\gcd( x^{n-1}+1, x^n - 1)$. When $n$ is even, this gcd has at least the factor $x+1$ and so $A$ is singular and the midpoint transformation cannot be inverted.</p> <p>The gcd is actually a divisor of $x+1= x(x^{n-1}+1)-(x^n - 1)$ and so is either $1$ or $x+1$. Therefore, $A$ is singular iff $n$ is even.</p> Answer 33: <p>For brevity, let's say that if $X_1X_2X_3...X_n$ is a polygon, and $Y_1Y_2Y_3...Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. 
Answer 30: <p>For brevity, let's say that if $X_1X_2X_3\dots X_n$ is a polygon and $Y_1Y_2Y_3\dots Y_n$ its midpoint polygon, we will call $Y_1\dots Y_n$ the <em>dual</em> of $X_1\dots X_n$, and $X_1 \dots X_n$ a <em>predual</em> of $Y_1\dots Y_n$. The following results completely characterize when a polygon $Y_1 \dots Y_n$ has a predual, when that predual is unique, and when two different (i.e. noncongruent) polygons have the same (i.e. congruent) duals.</p> <p>If $n$ is odd, then any polygon $Y_1 \dots Y_n$ has a unique predual.</p> <p>If $n$ is even, things are much more complicated. Let $\vec{e_1}, \vec{e_2}, \dots, \vec{e_n}$ denote the vectors that point from each vertex to the next; that is, $\vec{e_1} = \vec{Y_1Y_2},\vec{e_2}=\vec{Y_2Y_3},\dots, \vec{e_n}=\vec{Y_nY_1}$. For any polygon it is certainly true that $\vec{e_1}+\vec{e_2}+\cdots+\vec{e_n}=0$. The following condition is necessary and sufficient for the polygon to have a predual: we require that $\vec{e_1}+\vec{e_3}+\cdots+\vec{e_{n-1}}=0$ and $\vec{e_2}+\vec{e_4}+\cdots+\vec{e_{n}}=0$. In other words, the <em>odd-numbered edges</em> and the <em>even-numbered edges</em> must each sum to zero, so that each set of edges closes up into a polygon of its own.</p> <p>In the case $n=4$, the condition above reduces to the requirement that $\vec{e_1}=-\vec{e_3}$ and $\vec{e_2}=-\vec{e_4}$, i.e., that the quadrilateral is a parallelogram.</p> <p>Regarding uniqueness: form any two polygons $X_1X_3\dots X_{n-1}$ and $X_2X_4\dots X_n$ with an equal number of vertices, and let $X_2'X_4'\dots X_n'$ be any translation of $X_2X_4\dots X_n$ in the plane. Then the two polygons $X_1X_2X_3\dots X_{n-1}X_n$ and $X_1X_2'X_3\dots X_{n-1}X_n'$ have congruent duals.</p> <p>Put another way: decompose any $2n$-gon into two $n$-gons by joining every other vertex, and let those two $n$-gons "float" in the plane relative to one another. As they take different positions, the original $2n$-gon deforms into different shapes, but all of those different $2n$-gons have congruent duals. This is the <em>only</em> way that two different polygons can have the same dual.</p>
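(Editorial note: another short sketch, not from the original answer, checking the even-$n$ edge-sum condition on two quadrilaterals; names and sample coordinates are illustrative.)

```python
def has_predual_even(Y):
    """Y: list of (x, y) vertices, len(Y) even. Returns True iff the
    odd-numbered and even-numbered edge vectors each sum to zero."""
    n = len(Y)
    edges = [(Y[(i + 1) % n][0] - Y[i][0], Y[(i + 1) % n][1] - Y[i][1])
             for i in range(n)]
    odd, even = edges[0::2], edges[1::2]
    return (sum(e[0] for e in odd) == sum(e[1] for e in odd) == 0 and
            sum(e[0] for e in even) == sum(e[1] for e in even) == 0)

print(has_predual_even([(0, 0), (3, 0), (4, 2), (1, 2)]))  # parallelogram: True
print(has_predual_even([(0, 0), (3, 0), (4, 2), (0, 3)]))  # generic quad: False
```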
How to solve this series: $\sum_{k=0}^{2n+1} (-1)^kk $
2,082,210
11
https://math.stackexchange.com/questions/2082210/how-to-solve-this-series-sum-k-02n1-1kk
<p>$$\sum_{k=0}^{2n+1} (-1)^kk $$</p> <p>The answer given is $-n-1$. I have searched for how to do it, but I have problems simplifying the sum and solving it. How do you go about solving this?</p>
[ { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = 
-(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} 
(-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" }, { "answer_id": 2082212, "score": 18, "text": "<p>Group the terms together in pairs:\n$$\\sum_{k=0}^{2n+1} (-1)^kk = \\sum_{k=0}^{n} (2k - (2k+1)) = \\sum_{k=0}^{n} (-1) = -(n+1)$$</p>\n" }, { "answer_id": 2082214, "score": 16, "text": "<p>Just write it out:\n\\begin{aligned}\n\\sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\\cdots+2n-(2n+1)\\\\\n&amp;=(0-1)+(2-3)+\\cdots+[2n-(2n+1)]\\\\\n&amp;=(-1)+(-1)+\\cdots+(-1)\\\\\n&amp;=(n+1)(-1)\\\\\n&amp;=-n-1.\n\\end{aligned}</p>\n" }, { "answer_id": 2082225, "score": 9, "text": "<p>Split it into two series:</p>\n\n<p>\\begin{align*}\n\\sum\\limits_{k=0}^{2n+1}(-1)^kk\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k+1\\right)\\\\\n&amp;=\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}2k\\right)-\\left(\\sum\\limits_{k=0}^{n}1\\right)\\\\\n&amp;=-\\sum\\limits_{k=0}^{n}1\\\\\n&amp;=-(n+1)\\,.\n\\end{align*}</p>\n" }, { "answer_id": 2082224, "score": 6, "text": "<p>You have:</p>\n\n<p>$$\n\\begin{align}\n\\sum_{k=0}^{2n+1} (-1)^k k &amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} (2k+1) \\\\\n&amp;= \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 2k - \\sum_{k=0}^{n} 1 \\\\\n&amp;= - \\sum_{k=0}^{n} 1 \\\\\n&amp;= -(n+1)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083058, "score": 1, "text": "<p>Another approach is to exploit the sum $$\\sum_{k=0}^{2n+1}x^{k}=\\frac{1-x^{2n+2}}{1-x}.\n $$ Taking the derivative we have $$\\sum_{k=0}^{2n+1}kx^{k}=\\sum_{k=1}^{2n+1}kx^{k}=x\\frac{-\\left(2n+2\\right)x^{2n+1}\\left(1-x\\right)+1-x^{2n+2}}{\\left(1-x\\right)^{2}}\n $$ hence taking $x=-1\n $ we get $$\\sum_{k=0}^{2n+1}k\\left(-1\\right)^{k}=\\color{red}{-\\left(n+1\\right)}$$ as wanted.</p>\n" } ]
Question: How to solve this series: $\sum_{k=0}^{2n+1} (-1)^kk $ Answer 1: <p>Group the terms together in pairs: $$\sum_{k=0}^{2n+1} (-1)^kk = \sum_{k=0}^{n} (2k - (2k+1)) = \sum_{k=0}^{n} (-1) = -(n+1)$$</p> Answer 2: <p>Just write it out: \begin{aligned} \sum_{k=0}^{2n+1} (-1)^kk&amp;=0-1+2-3+\cdots+2n-(2n+1)\\ &amp;=(0-1)+(2-3)+\cdots+[2n-(2n+1)]\\ &amp;=(-1)+(-1)+\cdots+(-1)\\ &amp;=(n+1)(-1)\\ &amp;=-n-1. \end{aligned}</p> Answer 3: <p>Split it into two series:</p> <p>\begin{align*} \sum\limits_{k=0}^{2n+1}(-1)^kk &amp;=\left(\sum\limits_{k=0}^{n}2k\right)-\left(\sum\limits_{k=0}^{n}2k+1\right)\\ &amp;=\left(\sum\limits_{k=0}^{n}2k\right)-\left(\sum\limits_{k=0}^{n}2k\right)-\left(\sum\limits_{k=0}^{n}1\right)\\ &amp;=-\sum\limits_{k=0}^{n}1\\ &amp;=-(n+1)\,. \end{align*}</p> Answer 4: <p>You have:</p> <p>$$ \begin{align} \sum_{k=0}^{2n+1} (-1)^k k &amp;= \sum_{k=0}^{n} 2k - \sum_{k=0}^{n} (2k+1) \\ &amp;= \sum_{k=0}^{n} 2k - \sum_{k=0}^{n} 2k - \sum_{k=0}^{n} 1 \\ &amp;= - \sum_{k=0}^{n} 1 \\ &amp;= -(n+1) \end{align} $$</p> Answer 5: <p>Another approach is to exploit the sum $$\sum_{k=0}^{2n+1}x^{k}=\frac{1-x^{2n+2}}{1-x}. $$ Taking the derivative we have $$\sum_{k=0}^{2n+1}kx^{k}=\sum_{k=1}^{2n+1}kx^{k}=x\frac{-\left(2n+2\right)x^{2n+1}\left(1-x\right)+1-x^{2n+2}}{\left(1-x\right)^{2}} $$ hence taking $x=-1 $ we get $$\sum_{k=0}^{2n+1}k\left(-1\right)^{k}=\color{red}{-\left(n+1\right)}$$ as wanted.</p>
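All five answers above land on the same closed form, $-(n+1)$. As an editorial sanity check (not part of the original thread; the helper name `alternating_sum` is ours), a few lines of Python confirm it for small $n$:

```python
# Editorial check: sum_{k=0}^{2n+1} (-1)^k * k should equal -(n+1).
def alternating_sum(n):
    """Evaluate the alternating sum directly, k = 0 .. 2n+1."""
    return sum((-1) ** k * k for k in range(2 * n + 2))

for n in range(8):
    assert alternating_sum(n) == -(n + 1)
print("closed form -(n+1) confirmed for n = 0..7")
```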
Where is the mass of a hypercube?
2,085,082
11
https://math.stackexchange.com/questions/2085082/where-is-the-mass-of-a-hypercube
<p><strong>Question</strong>: </p> <blockquote> <p>Consider a hypercube $C := \{\vec{x}\in\mathbb{R}^n : \forall(i)\: |x_i|\leq 1\}$ and hypersphere $S_r := \{\vec{x}\in\mathbb{R}^n : |\vec{x}| = r\}$. Let $R(n)$ be the radius of the sphere for which the intersection $C\cap S_{R(n)}$ has maximal area. What is the asymptotic behavior of $R(n)$ as $n\rightarrow \infty$?</p> </blockquote> <hr> <p><strong>Background</strong>: </p> <p>For an $n$-dimensional ball, the mass on a shell of radius $r$ grows like $r^{n-1}$ right up to the boundary of the ball. Knowing where the mass is located on a high-dimensional object is useful in, e.g., statistical mechanics. </p> <p>By comparison to the ball case, you might expect that the mass of a cube would also be near the farthest extremity, which for the case of $C$, above, would be at the vertices, distance $\sqrt{n}$ from the center. However, there is an argument that there is not much mass near the vertices: Consider centering little balls on each of the vertices and looking at the volume of the intersection of the balls with the cube (see figure). Each ball intersects the cube only in a sector of $1/2^n$ of the ball, and there are $2^n$ vertices, so the total volume of the balls' intersections is just the volume of one ball. Thus there is the same volume in a neighborhood of the vertices as there is in a neighborhood of the center of the cube (or any other interior point)!</p> <p><a href="https://i.stack.imgur.com/GESfS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GESfS.png" alt="Cube"></a></p> <p>Hence the above question: What shell of the cube has the most mass? It's easy to see that $1&lt;R(n)&lt;\sqrt{n}$. Can you do better?</p>
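Before the answers below, the vertex-ball argument in the background is easy to test numerically. This is an editorial Monte Carlo sketch, not part of the original post, assuming numpy and scipy are available: for $r \le 1$ the $2^n$ vertex sectors are disjoint, so the fraction of the cube within distance $r$ of some vertex should match the volume of a single $r$-ball divided by the cube volume $2^n$.

```python
# Editorial sketch: cube volume within distance r of *some* vertex vs. the
# volume of one r-ball (the 2^n vertex sectors are disjoint for r <= 1).
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
n, r, samples = 6, 1.0, 200_000

x = rng.uniform(-1.0, 1.0, size=(samples, n))
# Per coordinate the nearest vertex coordinate is sign(x_i) * 1, at distance
# 1 - |x_i|, so the squared distance to the nearest vertex is sum (1-|x_i|)^2.
d2 = np.sum((1.0 - np.abs(x)) ** 2, axis=1)
mc_fraction = np.mean(d2 <= r * r)

ball_volume = np.pi ** (n / 2) / gamma(n / 2 + 1) * r ** n  # n-ball volume
print(f"Monte Carlo fraction near vertices: {mc_fraction:.4f}")
print(f"one r-ball over cube volume 2^n:    {ball_volume / 2 ** n:.4f}")
```

For $n = 6$, $r = 1$, both numbers come out near $\pi^3/6/2^6 \approx 0.081$, consistent with the argument that the vertices hold only one ball's worth of volume in total.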
[ { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. 
Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. 
the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. 
We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. 
The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. 
</p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. 
Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. 
the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. 
We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. 
The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. 
</p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" }, { "answer_id": 2646392, "score": 4, "text": "<p><strong>Answer:</strong> $R(n)\\rightarrow \\sqrt{\\frac{n}{3}}$ for large $n$.</p>\n\n<p><strong>Explanation:</strong> </p>\n\n<p>As shown in <a href=\"https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables\">this post</a>, for IID random variables $X_1, X_2,\\dots,X_n$ with mean $\\mu$ and variance $\\sigma^2$, the square root of their sum $Y := \\sqrt{\\sum_i X_i}$ becomes Gaussian with mean $\\sqrt{n\\mu}$ and variance $\\frac{\\sigma^2}{4\\sqrt{\\mu}}$ as $n\\rightarrow \\infty$. </p>\n\n<p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\\mu=\\left&lt;x^2\\right&gt; = \\frac{1}{2}\\int_{-1}^1 x^2 dx = \\frac{1}{3}$$</p>\n\n<p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\\sqrt{\\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p>\n" }, { "answer_id": 2649371, "score": 4, "text": "<p>An alternate derivation, similar to Yly's :</p>\n\n<p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p>\n\n<p>Now, if the coordinates are $X_1, X_2 \\cdots X_n$, then we have $X_i \\sim \\mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\\frac{\\sum_{i=1}^n Y_i}{d}$ (so that $0\\le Z \\le 1$) we get $E[Y_i]= \\frac13$, $Var(Y_i)=\\frac{4}{45}$, so by the CLT $$Z\\to \\mathcal{N}(\\frac13, \\frac{4}{ 45d }) $$</p>\n\n<p>Then $Z$ will be highly concentrated around $Z=\\frac13$, which means $R(n)\\to \\sqrt \\frac{n}{3}$</p>\n" } ]
Question: Where is the mass of a hypercube? Answer 1: <p><strong>Answer:</strong> $R(n)\rightarrow \sqrt{\frac{n}{3}}$ for large $n$.</p> <p><strong>Explanation:</strong> </p> <p>As shown in <a href="https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables">this post</a>, for IID random variables $X_1, X_2,\dots,X_n$ with mean $\mu$ and variance $\sigma^2$, the square root of their sum $Y := \sqrt{\sum_i X_i}$ becomes Gaussian with mean $\sqrt{n\mu}$ and variance $\frac{\sigma^2}{4\sqrt{\mu}}$ as $n\rightarrow \infty$. </p> <p>In a $n$-cube, the radius squared of any point is given by the sum of the $n$ IID squared coordinates, which have mean $$\mu=\left&lt;x^2\right&gt; = \frac{1}{2}\int_{-1}^1 x^2 dx = \frac{1}{3}$$</p> <p>Hence for large $n$, the radial mass distribution of an $n$-cube is Gaussian, with mean $\sqrt{\frac{n}{3}}$. The shell with maximal area is the peak of this Gaussian, which is precisely the mean.</p> Answer 2: <p>An alternate derivation, similar to Yly's :</p> <p>Imagine we throw points with uniform probability inside the hypercube. We want to get the radius with higher density of points, i.e. the mode of the radial probability density function.</p> <p>Now, if the coordinates are $X_1, X_2 \cdots X_n$, then we have $X_i \sim \mathcal U_{[-1,1]}$. Letting $Y_i = X_i^2$ and normalizing the sum $Z=\frac{\sum_{i=1}^n Y_i}{n}$ (so that $0\le Z \le 1$) we get $E[Y_i]= \frac13$, $Var(Y_i)=\frac{4}{45}$, so by the CLT $$Z\to \mathcal{N}\left(\frac13, \frac{4}{45n}\right) $$</p> <p>Then $Z$ will be highly concentrated around $Z=\frac13$, which means $R(n)\to \sqrt{\frac{n}{3}}$</p>
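As a cross-check on the two answers above, the $\sqrt{n/3}$ prediction is easy to probe by simulation. The following is a minimal sketch, assuming NumPy is available; the sample size and the dimensions tested are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check of R(n) -> sqrt(n/3): sample uniform points in
# [-1, 1]^n and compare the typical radius with the CLT prediction.
rng = np.random.default_rng(seed=0)

for n in (3, 30, 300):
    points = rng.uniform(-1.0, 1.0, size=(100_000, n))
    radii = np.linalg.norm(points, axis=1)  # Euclidean norm of each sample
    print(f"n={n:4d}  mean radius={radii.mean():.4f}  sqrt(n/3)={np.sqrt(n / 3):.4f}")
```

For growing $n$ the sample radii cluster ever more tightly around $\sqrt{n/3}$, which is exactly the concentration the CLT arguments predict.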
Intuition behind $\zeta(-1) = \frac{-1}{12}$
2,086,035
11
https://math.stackexchange.com/questions/2086035/intuition-behind-zeta-1-frac-112
<p>When I first watched Numberphile's <a href="https://www.youtube.com/watch?v=w-I6XTVZXww" rel="noreferrer">1+2+3+... = $\frac{-1}{12}$</a> I thought the sum actually <strong>equalled</strong> $\frac{-1}{12}$ without really understanding it.</p> <p>Recently I read some WolframAlpha pages and watched some videos, and now I understand (I think) that $\frac{-1}{12}$ is just a value associated with the sum of all natural numbers when you analytically continue the Riemann zeta function. 3Blue1Brown's <a href="https://www.youtube.com/watch?v=sD0NjbwqlYw&amp;list=PLs2m_DYYX0ZDtE0VXTYVJFMxm54W-vc6V" rel="noreferrer">video</a> really helped. What I don't really understand is why it gives the value $\frac{-1}{12}$ specifically. The value $\frac{-1}{12}$ seems arbitrary to me and I don't see any connection to the sum of all natural numbers. Is there any intuition behind why you get $\frac{-1}{12}$ when you analytically continue the zeta function and evaluate $\zeta(-1)$?</p> <p><strong>EDIT</strong> (just to make my question a little clearer): I'll use an example here. Suppose you somehow didn't know about radians and never associated trig functions like sine with $\pi$, but you knew about Maclaurin expansion. By plugging $x=\pi$ into the series expansion of sine, you would get $\sin(\pi) = 0$. You might have understood the process by which you get the value 0, the Maclaurin expansion, but you wouldn't really know the intuition behind this connection between $\pi$ and trig functions, namely the unit circle, which is essential in almost every branch of number theory.</p> <p>Back to this question, I understand the analytic continuation of the zeta function and its continued form for $s &lt; 0$ $$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$ and how, when you plug in s = -1, things simplify down to $\frac{-1}{12}$, but I don't see any connection between the fraction and the infinite sum. I'm sure there is a beautiful connection between them, like the one between trig functions and $\pi$, but I couldn't find any useful resources on the internet. Hope this clarified things.</p>
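Before the posted answers, the claimed value can be confirmed numerically. This is a quick sketch assuming the mpmath package is available; its `zeta` routine implements the analytic continuation the question refers to, and the chosen precision is arbitrary:

```python
from mpmath import mp, zeta

# Evaluate the analytically continued Riemann zeta function at s = -1
# and compare with -1/12 (requires the mpmath package).
mp.dps = 20                        # working precision in decimal digits
print(zeta(-1))                    # -0.083333333333333333333
print(zeta(-1) + mp.mpf(1) / 12)   # deviation from -1/12, essentially 0
```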
[ { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": "<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms 
beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. 
</p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" }, { "answer_id": 2086052, "score": 12, "text": "<p>There is a really nice answer by master Terence Tao on his blog.</p>\n\n<p><a href=\"https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/\" rel=\"noreferrer\">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p>\n\n<p>It shows that smoothed sums $\\eta$ for $\\sum\\limits_{n\\le N}n^s\\,\\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\\frac{B_{s+1}}{s+1}$.</p>\n\n<p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p>\n" }, { "answer_id": 2086076, "score": 6, "text": "<p>The following is taken from <a href=\"https://math.stackexchange.com/a/632795/13854\">this answer</a>.</p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Dirichlet_eta_function\" rel=\"nofollow noreferrer\">Dirichlet Eta function</a> and integration by parts twice, we get\n$$\n\\begin{align}\n(1-2^{1-z})\\zeta(z)\\Gamma(z)\n&amp;=\\eta(z)\\Gamma(z)\\\\\n&amp;=\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1z\\int_0^\\infty\\frac{x^ze^x}{\\left(e^x+1\\right)^2}\\,\\mathrm{d}x\\\\\n&amp;=\\frac1{z(z+1)}\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\\\\\n\\end{align}\n$$\nMultiply by $z(x+1)$ to get\n$$\n(1-2^{1-z})\\zeta(z)\\Gamma(z+2)=\\int_0^\\infty\\frac{x^{z+1}\\left(e^{2x}-e^x\\right)}{\\left(e^x+1\\right)^3}\\,\\mathrm{d}x\n$$\nPlugging in $z=-1$, gives a pretty simple integral.\n$$\n\\begin{align}\n(1-2^2)\\zeta(-1)\\Gamma(1)\n&amp;=\\int_0^\\infty\\frac{e^{2x}-e^x}{(e^x+1)^3}\\mathrm{d}x\\\\\n&amp;=\\int_1^\\infty\\frac{u-1}{(u+1)^3}\\mathrm{d}u\\\\\n&amp;=\\int_1^\\infty\\left(\\frac1{(u+1)^2}-\\frac2{(u+1)^3}\\right)\\mathrm{d}u\\\\\n&amp;=\\frac14\n\\end{align}\n$$\nThis gives\n$$\n\\bbox[5px,border:2px solid #C0A000]{\\zeta(-1)=-\\frac1{12}}\n$$</p>\n\n<hr>\n\n<p><strong>The relation between $\\boldsymbol{\\zeta(z)}$ and $\\boldsymbol{\\eta(z)}$</strong></p>\n\n<p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms.\n$$\n\\begin{align}\n\\eta(z)\n&amp;=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}}{n^z}\\\\\n&amp;=\\sum_{n=1}^\\infty\\frac1{n^z}-2\\sum_{n=1}^\\infty\\frac1{(2n)^z}\\\\\n&amp;=\\left(1-2^{1-z}\\right)\\sum_{n=1}^\\infty\\frac1{n^z}\\\\[6pt]\n&amp;=\\left(1-2^{1-z}\\right)\\zeta(z)\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>The integral for $\\boldsymbol{\\eta(z)\\Gamma(z)}$</strong>\n$$\n\\begin{align}\n\\int_0^\\infty\\frac{x^{z-1}}{e^x+1}\\,\\mathrm{d}x\n&amp;=\\int_0^\\infty x^{z-1}\\sum_{k=1}^\\infty(-1)^{k-1}e^{-kx}\\,\\mathrm{d}x\\\\\n&amp;=\\sum_{k=1}^\\infty\\frac{(-1)^{k-1}}{k^z}\\int_0^\\infty x^{z-1}e^{-x}\\,\\mathrm{d}x\\\\[6pt]\n&amp;=\\eta(z)\\Gamma(z)\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086323, "score": 6, "text": 
"<p>In equation $(10)$ of <a href=\"https://math.stackexchange.com/a/2027588/13854\">this answer</a>, it is shown, using the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\\newcommand{\\Re}{\\operatorname{Re}}\\Re(z)\\gt-3$ is given by\n$$\n\\zeta(z)=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n{k^{-z}}-\\frac1{1-z}n^{1-z}-\\frac12n^{-z}+\\frac{z}{12}n^{-1-z}\\right]\\tag{1}\n$$\nNote that for $\\Re(z)\\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\\zeta(z)$:\n$$\n\\zeta(z)=\\sum_{n=1}^\\infty n^{-z}\\tag{2}\n$$</p>\n\n<blockquote>\n <p>For $z=-1$, $(1)$ becomes\n $$\n\\begin{align}\n\\zeta(-1)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{n^2+n}2-\\frac12n^2-\\frac12n-\\frac1{12}\\right]\\\\[3pt]\n&amp;=-\\frac1{12}\\tag{3}\n\\end{align}\n$$</p>\n</blockquote>\n\n<p>Furthermore, for $z=0$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(0)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^n1-n-\\frac12+\\frac0{12n}\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[n-n-\\frac12+\\frac0{12n}\\right]\\\\[3pt]\n&amp;=-\\frac12\\tag{4}\n\\end{align}\n$$\nand for $z=-2$, $(1)$ becomes\n$$\n\\begin{align}\n\\zeta(-2)\n&amp;=\\lim_{n\\to\\infty}\\left[\\sum_{k=1}^nk^2-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\\n&amp;=\\lim_{n\\to\\infty}\\left[\\frac{2n^3+3n^2+n}6-\\frac13n^3-\\frac12n^2-\\frac16n\\right]\\\\[9pt]\n&amp;=0\\tag{5}\n\\end{align}\n$$</p>\n" }, { "answer_id": 2086042, "score": 5, "text": "<p>We have the functional equation for $\\;\\zeta\\;$ :</p>\n\n<p>$$\\zeta(s)=2^s\\pi^{s-1}\\sin\\frac{\\pi s}2\\Gamma(1-s)\\zeta(1-s)$$</p>\n\n<p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\\;\\text{Re}\\,s&lt;1\\;$, and then:</p>\n\n<p>$$\\zeta(-1)=\\frac1{2\\pi^2}\\cdot(-1)\\cdot1\\cdot\\frac{\\pi^2}6=-\\frac1{12}$$</p>\n" }, { "answer_id": 2086075, "score": 4, "text": "<p><strong>An Elementary Non-Proof</strong></p>\n\n<p>Note that $\\dfrac{1}{(1-z)^2}=\\sum\\limits_{k=0}^\\infty\\,(k+1)\\,z^{k}$ leads to $$\\beta = 1-2+3-4+\\ldots=\\frac{1}{\\big(1-(-1)\\big)^2}=\\frac{1}{4}\\,.$$\nHence, if $\\alpha =1+2+3+\\ldots$, then $$\\alpha-\\beta =4+8+12+\\ldots=4\\,(1+2+3+\\ldots)=4\\,\\alpha \\,.$$\nThus, $$\\zeta(-1)=\\alpha=-\\frac{\\beta}{3}=-\\frac{1}{12}\\,.$$ </p>\n\n<p>Hope it helps. </p>\n" }, { "answer_id": 2086090, "score": 2, "text": "<p>$$1+1+1+\\dots+1=n$$</p>\n\n<p>$$\\int_{-1}^0x\\ dx=-\\frac12$$</p>\n\n<hr>\n\n<p>$$1+2+3+\\dots+n=\\frac{n(n+1)}2$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)}2\\ dx=-\\frac1{12}$$</p>\n\n<hr>\n\n<p>$$1^2+2^2+3^2+\\dots+n^2=\\frac{n(n+1)(2n+1)}6$$</p>\n\n<p>$$\\int_{-1}^0\\frac{x(x+1)(2x+1)}6\\ dx=0$$</p>\n\n<hr>\n\n<p><a href=\"https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers\">Integrating the formula for the sum of all natural numbers</a></p>\n" } ]
Question: Intuition behind $\zeta(-1)$ = $\frac{-1}{12}$ Answer 1: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, the zeta function and real-variable analytic continuation</a></p> <p>It shows that the smoothed sums $\sum\limits_{n\le N}n^s\,\eta(n/N)$, for a smooth cutoff function $\eta$, have a divergent part proportional to $N^{s+1}$ and a constant part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the post shows how this is related to analytic continuation in the complex plane.</p>
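<p>A minimal numerical sketch of that smoothing claim (not from the original answer; it assumes the Gaussian cutoff $\eta(x)=e^{-x^2}$, though any smooth, rapidly decaying cutoff should behave the same way): for $s=1$ the constant part is $-\frac{B_2}{2}=-\frac1{12}$, so subtracting the divergent piece $\int_0^\infty x\,e^{-(x/N)^2}\,dx=\frac{N^2}{2}$ from the smoothed sum should leave approximately $-\frac1{12}$.</p> <pre><code>import math

# Smoothed partial sums of 1 + 2 + 3 + ... with the Gaussian cutoff eta(x) = exp(-x^2).
# Euler-Maclaurin gives sum ~ N^2/2 - 1/12 + O(1/N^2), so the divergent N^2/2 piece
# is subtracted off to expose the constant term.
def smoothed_sum(N):
    # terms beyond n = 12*N are far below double precision (exp(-144) ~ 3e-63)
    return sum(n * math.exp(-((n / N) ** 2)) for n in range(1, 12 * N))

for N in (10, 100, 1000):
    print(N, smoothed_sum(N) - N * N / 2)  # approaches -1/12 = -0.08333...
</code></pre>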
Answer 2: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(z+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$ gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. $$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 3: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the extra terms vanish in the limit and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p>
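<p>The limit in $(1)$ converges quickly enough to check by direct computation (a minimal sketch, not from the original answer): at $z=-1$ the bracket is exactly $-\frac1{12}$ for every $n$, at $z=0$ it is exactly $-\frac12$, and at a non-integer point such as $z=-\frac12$ it tends to $\zeta(-\tfrac12)\approx-0.2079$.</p> <pre><code># Evaluate the bracket in formula (1) at a few points z for increasing n.
def zeta_bracket(z, n):
    s = sum(k ** (-z) for k in range(1, n + 1))
    return s - n ** (1 - z) / (1 - z) - 0.5 * n ** (-z) + (z / 12) * n ** (-1 - z)

for z in (-1, 0, -0.5):
    # z = -1 and z = 0 are exact for every n; z = -0.5 converges to ~ -0.2079
    print(z, [zeta_bracket(z, n) for n in (10, 100, 1000)])
</code></pre>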
Answer 4: <p>We have the functional equation for $\;\zeta\;$:</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows one to extend the <em>usual</em> definition of the zeta function as an infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 5: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. </p> Answer 6: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p>
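<p>Answer 6's pattern — integrate the closed form for $\sum_{k\le n}k^s$ over $[-1,0]$ and recover $\zeta(-s)$ — can be verified exactly in rational arithmetic (a small sketch, not from the original answer; <code>integrate_poly</code> is a hypothetical helper):</p> <pre><code>from fractions import Fraction

# Exact integration of a polynomial given by its coefficient list over [a, b];
# coeffs[i] is the coefficient of x^i.
def integrate_poly(coeffs, a, b):
    F = lambda x: sum(Fraction(c) / (i + 1) * x ** (i + 1) for i, c in enumerate(coeffs))
    return F(b) - F(a)

half, third, sixth = Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)
print(integrate_poly([0, 1], -1, 0))                   # x              -> -1/2
print(integrate_poly([0, half, half], -1, 0))          # x(x+1)/2       -> -1/12
print(integrate_poly([0, sixth, half, third], -1, 0))  # x(x+1)(2x+1)/6 -> 0
</code></pre>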
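<p>Finally, the $\eta$–$\zeta$ relation in Answer 2 suggests one more quick check (again a sketch, not from the original answers): Abel summation assigns $\eta(-1)=1-2+3-4+\cdots$ the value $\frac14$ via $\sum_{n\ge1}(-1)^{n-1}nx^n=\frac{x}{(1+x)^2}\to\frac14$ as $x\to1^-$, and then $\zeta(-1)=\eta(-1)/(1-2^{1-(-1)})=\frac{1/4}{-3}=-\frac1{12}$.</p> <pre><code># Abel-sum 1 - 2 + 3 - 4 + ... by letting x -> 1 from below; the closed form
# of the power series sum_{n>=1} (-1)^(n-1) n x^n is x/(1+x)^2.
for x in (0.9, 0.99, 0.999):
    s = sum((-1) ** (n - 1) * n * x ** n for n in range(1, 100000))
    print(x, s, x / (1 + x) ** 2)  # both approach 1/4
print((1 / 4) / (1 - 2 ** 2))      # zeta(-1) = eta(-1)/(1 - 2^2) = -1/12
</code></pre>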
</p> Answer 12: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 13: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 14: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 15: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 16: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 17: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 18: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 19: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 20: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 21: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 22: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 23: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 24: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 25: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 26: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 27: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 28: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 29: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 30: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 31: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 32: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 33: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 34: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 35: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 36: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 37: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 38: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 39: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 40: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 41: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 42: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 43: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 44: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 45: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 46: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 47: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 48: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 49: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 50: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p> Answer 51: <p>In equation $(10)$ of <a href="https://math.stackexchange.com/a/2027588/13854">this answer</a>, it is shown, using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Sum Formula</a>, that the analytic continuation of the zeta function for $\newcommand{\Re}{\operatorname{Re}}\Re(z)\gt-3$ is given by $$ \zeta(z)=\lim_{n\to\infty}\left[\sum_{k=1}^n{k^{-z}}-\frac1{1-z}n^{1-z}-\frac12n^{-z}+\frac{z}{12}n^{-1-z}\right]\tag{1} $$ Note that for $\Re(z)\gt1$, the terms beyond the sum vanish and we are left with the well-known definition of $\zeta(z)$: $$ \zeta(z)=\sum_{n=1}^\infty n^{-z}\tag{2} $$</p> <blockquote> <p>For $z=-1$, $(1)$ becomes $$ \begin{align} \zeta(-1) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk-\frac12n^2-\frac12n-\frac1{12}\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{n^2+n}2-\frac12n^2-\frac12n-\frac1{12}\right]\\[3pt] &amp;=-\frac1{12}\tag{3} \end{align} $$</p> </blockquote> <p>Furthermore, for $z=0$, $(1)$ becomes $$ \begin{align} \zeta(0) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^n1-n-\frac12+\frac0{12n}\right]\\ &amp;=\lim_{n\to\infty}\left[n-n-\frac12+\frac0{12n}\right]\\[3pt] &amp;=-\frac12\tag{4} \end{align} $$ and for $z=-2$, $(1)$ becomes $$ \begin{align} \zeta(-2) &amp;=\lim_{n\to\infty}\left[\sum_{k=1}^nk^2-\frac13n^3-\frac12n^2-\frac16n\right]\\ &amp;=\lim_{n\to\infty}\left[\frac{2n^3+3n^2+n}6-\frac13n^3-\frac12n^2-\frac16n\right]\\[9pt] &amp;=0\tag{5} \end{align} $$</p> Answer 52: <p>We have the functional equation for $\;\zeta\;$ :</p> <p>$$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$</p> <p>which allows to extend the <em>usual</em> definition of the zeta function as infinite series to $\;\text{Re}\,s&lt;1\;$, and then:</p> <p>$$\zeta(-1)=\frac1{2\pi^2}\cdot(-1)\cdot1\cdot\frac{\pi^2}6=-\frac1{12}$$</p> Answer 53: <p><strong>An Elementary Non-Proof</strong></p> <p>Note that $\dfrac{1}{(1-z)^2}=\sum\limits_{k=0}^\infty\,(k+1)\,z^{k}$ leads to $$\beta = 1-2+3-4+\ldots=\frac{1}{\big(1-(-1)\big)^2}=\frac{1}{4}\,.$$ Hence, if $\alpha =1+2+3+\ldots$, then $$\alpha-\beta =4+8+12+\ldots=4\,(1+2+3+\ldots)=4\,\alpha \,.$$ Thus, $$\zeta(-1)=\alpha=-\frac{\beta}{3}=-\frac{1}{12}\,.$$ </p> <p>Hope it helps. 
</p> Answer 54: <p>$$1+1+1+\dots+1=n$$</p> <p>$$\int_{-1}^0x\ dx=-\frac12$$</p> <hr> <p>$$1+2+3+\dots+n=\frac{n(n+1)}2$$</p> <p>$$\int_{-1}^0\frac{x(x+1)}2\ dx=-\frac1{12}$$</p> <hr> <p>$$1^2+2^2+3^2+\dots+n^2=\frac{n(n+1)(2n+1)}6$$</p> <p>$$\int_{-1}^0\frac{x(x+1)(2x+1)}6\ dx=0$$</p> <hr> <p><a href="https://math.stackexchange.com/questions/2252550/integrating-the-formula-for-the-sum-of-all-natural-numbers">Integrating the formula for the sum of all natural numbers</a></p> Answer 55: <p>There is a really nice answer by master Terence Tao on his blog.</p> <p><a href="https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="noreferrer">The Euler-Maclaurin formula, Bernoulli numbers, The zeta function and real variable analytic continuation/</a></p> <p>It shows that smoothed sums $\eta$ for $\sum\limits_{n\le N}n^s\,\eta(n/N)$ have a divergent part in $N^{s+1}$ and a convergent part $-\frac{B_{s+1}}{s+1}$.</p> <p>The second part of the paper shows how it is related to analytics continuation in the complex plane.</p> Answer 56: <p>The following is taken from <a href="https://math.stackexchange.com/a/632795/13854">this answer</a>.</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Dirichlet_eta_function" rel="nofollow noreferrer">Dirichlet Eta function</a> and integration by parts twice, we get $$ \begin{align} (1-2^{1-z})\zeta(z)\Gamma(z) &amp;=\eta(z)\Gamma(z)\\ &amp;=\int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x\\ &amp;=\frac1z\int_0^\infty\frac{x^ze^x}{\left(e^x+1\right)^2}\,\mathrm{d}x\\ &amp;=\frac1{z(z+1)}\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x\\ \end{align} $$ Multiply by $z(x+1)$ to get $$ (1-2^{1-z})\zeta(z)\Gamma(z+2)=\int_0^\infty\frac{x^{z+1}\left(e^{2x}-e^x\right)}{\left(e^x+1\right)^3}\,\mathrm{d}x $$ Plugging in $z=-1$, gives a pretty simple integral. $$ \begin{align} (1-2^2)\zeta(-1)\Gamma(1) &amp;=\int_0^\infty\frac{e^{2x}-e^x}{(e^x+1)^3}\mathrm{d}x\\ &amp;=\int_1^\infty\frac{u-1}{(u+1)^3}\mathrm{d}u\\ &amp;=\int_1^\infty\left(\frac1{(u+1)^2}-\frac2{(u+1)^3}\right)\mathrm{d}u\\ &amp;=\frac14 \end{align} $$ This gives $$ \bbox[5px,border:2px solid #C0A000]{\zeta(-1)=-\frac1{12}} $$</p> <hr> <p><strong>The relation between $\boldsymbol{\zeta(z)}$ and $\boldsymbol{\eta(z)}$</strong></p> <p>An alternating sum can be viewed as the sum of the non-alternating terms minus twice the sum of the even terms. 
$$ \begin{align} \eta(z) &amp;=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^z}\\ &amp;=\sum_{n=1}^\infty\frac1{n^z}-2\sum_{n=1}^\infty\frac1{(2n)^z}\\ &amp;=\left(1-2^{1-z}\right)\sum_{n=1}^\infty\frac1{n^z}\\[6pt] &amp;=\left(1-2^{1-z}\right)\zeta(z) \end{align} $$</p> <hr> <p><strong>The integral for $\boldsymbol{\eta(z)\Gamma(z)}$</strong> $$ \begin{align} \int_0^\infty\frac{x^{z-1}}{e^x+1}\,\mathrm{d}x &amp;=\int_0^\infty x^{z-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &amp;=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^z}\int_0^\infty x^{z-1}e^{-x}\,\mathrm{d}x\\[6pt] &amp;=\eta(z)\Gamma(z) \end{align} $$</p>
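<p>As a quick numerical cross-check of the identities above, here is a minimal sketch in Python, assuming the mpmath library is available (<code>mpmath.zeta</code> and <code>mpmath.altzeta</code> are its built-in zeta and Dirichlet eta functions; <code>zeta_em</code> is a helper name introduced here for formula $(1)$, not taken from any of the answers):</p> <pre><code># Sketch (assumes mpmath): check zeta(-1) = -1/12 via formula (1) and via the eta relation.
from mpmath import mp, mpf, zeta, altzeta

mp.dps = 25  # working precision, in decimal digits

def zeta_em(z, n=1000):
    # Bracketed expression of (1): partial sum of k^(-z) minus the Euler-Maclaurin tail terms.
    z, n = mpf(z), mpf(n)
    s = sum(mpf(k)**(-z) for k in range(1, int(n) + 1))
    return s - n**(1 - z)/(1 - z) - n**(-z)/2 + z/12*n**(-1 - z)

print(zeta_em(-1))                    # -1/12: exact for every n, since sum k = n(n+1)/2
print(zeta_em(0), zeta_em(-2))        # -1/2 and 0, matching (4) and (5)
print(altzeta(-1) / (1 - mpf(2)**2))  # eta route: eta(-1)/(1 - 2^(1-z)) at z = -1
print(zeta(-1))                       # mpmath's reference value, -1/12
</code></pre> <p>For $z\in\{-1,0,-2\}$ the bracketed expression in $(1)$ is already exact at every finite $n$, so the limit is immediate; for other $z$ with $\Re(z)\gt-3$ it converges as $n\to\infty$.</p>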
Good Mathematical Puzzle Book for Couples
2,086,724
11
https://math.stackexchange.com/questions/2086724/good-mathematical-puzzle-book-for-couples
<p>So my partner and I thought it would be great if we could do some maths problems over Skype while we are apart (long distance). We'd like some recommendations on problem books that are ideal for working on <em>together</em>. We know there are quite a few questions online for best puzzle books in general, but we'd like to know of any that have been written for, or are suited to, solving as a couple. </p> <p>We are both fairly proficient in general but are not experts in any one topic. Ideally, the book should cover multiple fields with fairly difficult logic/maths problems.</p> <p>As a starter list, what do people think of:</p> <ul> <li>Raymond M. Smullyan - The Gödelian Puzzle Book</li> <li>Peter Winkler - Mathematical Puzzles: A Connoisseur's Collection</li> <li>Raymond M. Smullyan - What is the name of this book?</li> </ul> <p>I'm aware this is a rather strange request but thought it was worth asking just in case.</p>
[ { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. 
HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" }, { "answer_id": 2086776, "score": 5, "text": "<p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p>\n\n<p>For instance, there is a <a href=\"http://rads.stackoverflow.com/amzn/click/0226282546\" rel=\"noreferrer\">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p>\n\n<p>Good luck!</p>\n" }, { "answer_id": 2086985, "score": 1, "text": "<p>No doubt, the book \"Fifty Challenging Puzzles\" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href=\"http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html\" rel=\"nofollow noreferrer\">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the \"Inspection Paradox\" and you will find lots of interesting applications and fun solving it over skype. HTH</p>\n" } ]
Question: Good Mathematical Puzzle Book for Couples Answer 1: <p>You might have some luck with the books of Martin Gardner. Many of these books are collections from his monthly column on Mathematical Puzzles, Games, and Diversions for Scientific American.</p> <p>For instance, there is a <a href="http://rads.stackoverflow.com/amzn/click/0226282546" rel="noreferrer">widely-loved book</a> including his article on Hexaflexagons. But he has many books.</p> <p>Good luck!</p> Answer 2: <p>No doubt, the book "Fifty Challenging Puzzles" in probability by Mosteller will likely be the most entertaining for probability puzzles. A good list is documented <a href="http://bayesianthink.blogspot.com/2012/12/the-best-books-to-learn-probability.html" rel="nofollow noreferrer">here</a> along with the book from Mosteller. While other types of puzzles (word puzzles, geometric jigsaw etc) are interesting, probability puzzles adds an extra dimension to things because you can see it manifest all around you. Search for the "Inspection Paradox" and you will find lots of interesting applications and fun solving it over skype. HTH</p>
Can the integral of a function be larger than the function itself?
2,087,706
11
https://math.stackexchange.com/questions/2087706/can-the-integral-of-a-function-be-larger-than-function-itself
<p>Is there any nonzero (i.e. not $f \equiv 0$), non-negative continuous function $f$ defined on $[0,1]$ which satisfies $\int_0^x f(t)\,dt \geq f(x)$ for all $x$ in $[0,1]$?</p>
[ { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. 
Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. 
Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" }, { "answer_id": 2087726, "score": 18, "text": "<p>Use the integrating factor $e^{-x}$ to find\n$$\n\\frac{d}{dx}\\left(e^{-x}\\int_0^xf(s)ds\\right)\\le 0\n$$\nso that after integration\n$$\ne^{-x}\\int_0^xf(s)ds-e^0·0\\le 0\n$$\nand thus\n$$\nf(x)\\le\\int_0^xf(s)ds\\le 0\n$$</p>\n\n<p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p>\n\n<hr>\n\n<p>See also <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"noreferrer\">Grönwall lemma</a>.</p>\n" }, { "answer_id": 2087799, "score": 3, "text": "<p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p>\n\n<p>$$\\int_a^b f(t)dt \\le \\int_a^b cdt = c(b-a) &lt; M(b-a)$$</p>\n\n<p>Note the strict inequality. Then</p>\n\n<p>$$\\int_0^{x_1} f(t)dt$$</p>\n\n<p>$$=\\int_0^a f(t)dt + \\int_a^b f(t)dt + \\int_b^{x_1} f(t)dt$$</p>\n\n<p>$$\\le M(a-0) + \\int_a^b f(t)dt + M(x_1-b)$$</p>\n\n<p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p>\n\n<p>$$= Mx_1\\le M=f(x_1)$$</p>\n\n<p>Thus</p>\n\n<p>$$\\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>\n" } ]
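The integrating-factor answer above (id 2087726) asserts its inequality without showing the differentiation. As an editorial gloss, here is the omitted step in LaTeX; it assumes the hypothesis $f(x)\le\int_0^x f(s)\,ds$, which is exactly what the answer's inequality presupposes.

```latex
% Product rule on e^{-x}\int_0^x f(s) ds, then the assumed bound f(x) <= \int_0^x f(s) ds:
\frac{d}{dx}\left(e^{-x}\int_0^{x}f(s)\,ds\right)
  = e^{-x}\left(f(x)-\int_0^{x}f(s)\,ds\right)\le 0,
\qquad e^{-0}\int_0^{0}f(s)\,ds = 0.
```

So $e^{-x}\int_0^x f(s)\,ds$ starts at $0$ and never increases, which is precisely the inequality the answer integrates to conclude $\int_0^x f(s)\,ds\le 0$.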
Question: Can the integral of a function be larger than function itself? Answer 1: <p>Use the integrating factor $e^{-x}$ to find $$ \frac{d}{dx}\left(e^{-x}\int_0^xf(s)ds\right)\le 0 $$ so that after integration $$ e^{-x}\int_0^xf(s)ds-e^0·0\le 0 $$ and thus $$ f(x)\le\int_0^xf(s)ds\le 0 $$</p> <p>As $f$ is supposed to be non-negative, the only solution is the zero function.</p> <hr> <p>See also <a href="https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality" rel="noreferrer">Grönwall lemma</a>.</p> Answer 2: <p>No. Since $f$ is continuous on a compact set it is bounded and achieves both its minimum and maximum, $m=f(x_0)$ and $M=f(x_1)$. Since $f$ is non-constant these are distinct. Pick any $c\in(m,M)$. Without loss of generality assume $x_0&lt;x_1$ By continuity there is some interval $(a,b)\in[0,1]$ containing $x_0$ on which $f&lt;c$. Thus</p> <p>$$\int_a^b f(t)dt \le \int_a^b cdt = c(b-a) &lt; M(b-a)$$</p> <p>Note the strict inequality. Then</p> <p>$$\int_0^{x_1} f(t)dt$$</p> <p>$$=\int_0^a f(t)dt + \int_a^b f(t)dt + \int_b^{x_1} f(t)dt$$</p> <p>$$\le M(a-0) + \int_a^b f(t)dt + M(x_1-b)$$</p> <p>$$&lt; M(a-0) + M(b-a) + M(x_1-b)$$</p> <p>$$= Mx_1\le M=f(x_1)$$</p> <p>Thus</p> <p>$$\int_0^{x_1} f(t)dt &lt; f(x_1)$$</p>
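Answer 2 above leaves two details implicit; the note below is an editorial gloss, assuming the domain $[0,1]$ and the non-negativity of $f$ that the two answers already use.

```latex
% On (a,b) we have f < c < M = f(x_1), so x_1 \notin (a,b); combined with
% a < x_0 < x_1 this forces b \le x_1, which legitimises the three-piece split
\int_0^{x_1} f(t)\,dt=\int_0^{a} f(t)\,dt+\int_a^{b} f(t)\,dt+\int_b^{x_1} f(t)\,dt ;
% and the closing estimate uses 0 \le x_1 \le 1 together with M \ge 0:
M x_1 \le M\cdot 1 = M = f(x_1).
```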
Integral $\int \frac{\sqrt{\sin x}}{\sqrt{\sin x} + \sqrt{\cos x}} dx$
2088405
11
https://math.stackexchange.com/questions/2088405/integral-int-frac-sqrt-sin-x-sqrt-sin-x-sqrt-cos-x-dx
<p>We have to evaluate the following integral:</p> <p>$$\int \frac{\sqrt{\sin x}}{\sqrt{\sin x} + \sqrt{\cos x}} dx$$</p> <p>I tried this:</p> <p>I multiplied both the numerator and denominator by $\sec x$<br> And substituted $\tan x = t$.</p> <p>But after that I got stuck.</p> <p>The book where this is taken from gives the following as the answer: $$\ln(1+t)-\frac14\ln(1+t^4)+\frac1{2\sqrt2}\ln\frac{t^2-\sqrt2t+1}{t^2+\sqrt2t+1}-\frac12\tan^{-1}t^2+c$$ where $t=\sqrt{\cot x}$</p>
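Before the answers, it is worth recording where the attempt in the body leads; the sketch below is an editorial completion rather than part of the original post, and it assumes $0<x<\pi/2$ so that every square root is real.

```latex
% Dividing top and bottom by sqrt(cos x), then t = tan x with dx = dt/(1+t^2):
I=\int\frac{\sqrt{\tan x}}{1+\sqrt{\tan x}}\,dx
 =\int\frac{\sqrt{t}}{\left(1+\sqrt{t}\right)\left(1+t^{2}\right)}\,dt ;
% the root disappears under the further substitution t = v^2, dt = 2v dv:
I=\int\frac{2v^{2}}{(1+v)(1+v^{4})}\,dv ,\qquad v=\sqrt{\tan x};
% the one piece that resists inspection yields to w = v + 1/v, dw = (1-1/v^2) dv:
\int\frac{v^{2}-1}{1+v^{4}}\,dv=\int\frac{dw}{w^{2}-2}
 =\frac{1}{2\sqrt{2}}\ln\left|\frac{w-\sqrt{2}}{w+\sqrt{2}}\right|+C
 =\frac{1}{2\sqrt{2}}\ln\frac{v^{2}-\sqrt{2}\,v+1}{v^{2}+\sqrt{2}\,v+1}+C .
```

The book's answer takes $t=\sqrt{\cot x}$ instead, which produces the mirror integral $-\int\frac{2t\,dt}{(1+t)(1+t^{4})}$ derived in the answers below; both routes end at the same four elementary pieces, including the logarithm above.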
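The quoted closed form is also easy to check numerically. Below is a minimal SymPy sketch (an editorial verification, assuming SymPy is available; the sample points are arbitrary interior points of $(0,\pi/2)$, where $t=\sqrt{\cot x}$ is real): it differentiates the book's antiderivative and compares against the integrand.

```python
import sympy as sp

x = sp.symbols('x')
t = sp.sqrt(sp.cot(x))  # the book's substitution t = sqrt(cot x)

# The book's antiderivative, transcribed from the question body
F = (sp.log(1 + t)
     - sp.log(1 + t**4) / 4
     + sp.log((t**2 - sp.sqrt(2)*t + 1) / (t**2 + sp.sqrt(2)*t + 1)) / (2 * sp.sqrt(2))
     - sp.atan(t**2) / 2)

integrand = sp.sqrt(sp.sin(x)) / (sp.sqrt(sp.sin(x)) + sp.sqrt(sp.cos(x)))

# F'(x) should reproduce the integrand on 0 < x < pi/2
residual = sp.diff(F, x) - integrand
for x0 in (0.3, 0.7, 1.1, 1.4):
    print(x0, sp.N(residual.subs(x, x0)))  # each printed value should be ~0
```

A fully symbolic `sp.simplify(residual)` may or may not reach $0$ on its own; the pointwise check sidesteps that.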
[ { "answer_id": 2088455, "score": 8, "text": "<p>$\displaystyle \mathcal{I} = \int\frac{\sqrt{\sin x}}{\sqrt{\sin x}+\sqrt{\cos x}}dx = \int \frac{\sqrt{\tan x}}{1+\sqrt{\tan x}}dx$</p>\n\n<p>substitute $\tan x= t^2$, so that $\sec^2 x\,dx = 2t\,dt$ and $\displaystyle dx = \frac{2t}{1+t^4}dt$</p>\n\n<p>$\displaystyle \mathcal{I}= \int\frac{2t^2}{(1+t)(1+t^4)}dt$; since $2t^2-(1+t^4) = -(t^2-1)^2 = -(1+t)(t^3-t^2-t+1)$, partial fractions give</p>\n\n<p>$\displaystyle \frac{2t^2}{(1+t)(1+t^4)} = \frac{1}{1+t}-\frac{t^3}{1+t^4}+\frac{t}{1+t^4}+\frac{t^2-1}{1+t^4}$</p>\n\n<p>$\displaystyle \mathcal{I} =\ln|1+t|-\frac{1}{4}\ln(1+t^4)+\frac{1}{2}\tan^{-1}(t^2)+\int \frac{t^2-1}{1+t^4}dt+\mathcal{C}$</p>\n\n<p>all integrals are easy except $\displaystyle \mathcal{J} = \int\frac{t^2-1}{1+t^4}dt = \int\frac{1-t^{-2}}{\left(t+t^{-1}\right)^2-2}dt = \int\frac{(t+t^{-1})'}{(t+t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\int \frac {\sin x-\sqrt {\cos x\sin x}}{\sin x-\cos x} dx $$ Writing everything in terms of $\cot x $, we get $$I =\int \mathrm{csc}^2 x\left(\frac {\sqrt {\cot x}-1}{\cot^3 x-\cot^2 x+\cot x-1}\right) dx $$ Now substituting $u=\cot x $ and further $v=\sqrt {u} $ gives us $$I = -\int\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \frac {2}{4-4\sqrt {2}}\int \frac {v +\sqrt {2}-1}{v^2 +\sqrt {2}v +1} dv +\frac {2}{4+4\sqrt {2}}\int \frac {v-\sqrt {2}-1}{v^2-\sqrt {2 }v+1} dv+\int \frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply numerator and denominator by $\frac{1}{\sqrt{\sin(x)}}$, then substitute $t = \sqrt{\cot(x)}$; after all that you'll have $\displaystyle-\frac{2t}{(t^4 + 1)(t+1)}$ as the new integrand</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\int \frac {\sin x-\sqrt {\cos x\sin x}}{\sin x-\cos x} dx $$ Writing everything in terms of $\cot x $, we get $$I =\int \mathrm{csc}^2 x\left(\frac {\sqrt {\cot x}-1}{\cot^3 x-\cot^2 x+\cot x-1}\right) dx $$ Now substituting $u=\cot x $ and further $v=\sqrt {u} $ gives us $$I = -\int\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \frac {2}{4-4\sqrt {2}}\int \frac {v +\sqrt {2}-1}{v^2 +\sqrt {2}v +1} dv +\frac {2}{4+4\sqrt {2}}\int \frac {v-\sqrt {2}-1}{v^2-\sqrt {2 }v+1} dv+\int \frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" }, { "answer_id": 2088455, "score": 8, "text": "<p>$\\displaystyle \\mathcal{I} = \\int\\frac{\\sqrt{\\sin x}}{\\sqrt{\\sin x}+\\sqrt{\\cos x}}dx = \\int \\frac{\\sqrt{\\tan x}}{1+\\sqrt{\\tan x}}dx$</p>\n\n<p>substitute $\\tan x= t^2$ and $\\displaystyle dx = \\frac{1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle \\mathcal{I}= \\int\\frac{t}{(1+t)(1+t^4)}dt = \\frac{1}{2}\\int\\frac{\\bigg((1+t^4)+(1-t^4)\\bigg)t}{(1+t)(1+t^4)}dt$ </p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int\\frac{t}{1+t}dt+\\frac{1}{2}\\int\\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p>\n\n<p>$\\displaystyle = \\frac{1}{2}\\int \\frac{(1+t)-1}{1+t}dt+\\frac{1}{2}\\int \\frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p>\n\n<p>$\\displaystyle =-\\frac{t}{2}+\\frac{1}{2}\\ln|t+1|+\\frac{1}{4}\\int\\frac{2t}{1+t^4}+\\frac{1}{2}\\int\\frac{t^3}{1+t^4}dt-\\frac{1}{2}\\int \\frac{t^2-1}{1+t^4}dt-\\frac{1}{2}t+\\mathcal{C}$</p>\n\n<p>all integrals are easy except $\\displaystyle \\mathcal{J} = \\int\\frac{t^2-1}{1+t^4}dt = \\int\\frac{1-t^{-2}}{\\left(t+t^{-1}\\right)^2-2}dt = \\int\\frac{(t-t^{-1})'}{(t-t^{-1})^2-2}dt$</p>\n" }, { "answer_id": 2088415, "score": 5, "text": "<p>We rationalise the denominator to get $$I =\\int \\frac {\\sin x-\\sqrt {\\cos x\\sin x}}{\\sin x-\\cos x} dx $$ Writing everything in terms of $\\cot x $, we get $$I =\\int \\mathrm{csc}^2 x\\left(\\frac {\\sqrt {\\cot x}-1}{\\cot^3 x-\\cot^2 x+\\cot x-1}\\right) dx $$ Now substituting $u=\\cot x $ and further $v=\\sqrt {u} $ gives us $$I = -\\int\\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \\frac {2}{4-4\\sqrt {2}}\\int \\frac {v +\\sqrt {2}-1}{v^2 +\\sqrt {2}v +1} dv +\\frac {2}{4+4\\sqrt {2}}\\int \\frac {v-\\sqrt {2}-1}{v^2-\\sqrt {2 }v+1} dv+\\int \\frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. </p>\n" }, { "answer_id": 2088411, "score": 4, "text": "<p><em>HINT</em> multiply nominator and denominator by $\\frac{1}{\\sqrt{sin(x)}}$, then $t = \\sqrt{cot(x)} $ after all you'll have $\\displaystyle\\frac{2t}{(t^4 + 1)(t+1)}$</p>\n" } ]
Question: Integral $\int \frac{\sqrt{\sin x}}{\sqrt{\sin x} + \sqrt{\cos x}} dx$ Answer 1: <p>$\displaystyle \mathcal{I} = \int\frac{\sqrt{\sin x}}{\sqrt{\sin x}+\sqrt{\cos x}}dx = \int \frac{\sqrt{\tan x}}{1+\sqrt{\tan x}}dx$</p> <p>substitute $\tan x= t^2$ and $\displaystyle dx = \frac{1}{1+t^4}dt$</p> <p>$\displaystyle \mathcal{I}= \int\frac{t}{(1+t)(1+t^4)}dt = \frac{1}{2}\int\frac{\bigg((1+t^4)+(1-t^4)\bigg)t}{(1+t)(1+t^4)}dt$ </p> <p>$\displaystyle = \frac{1}{2}\int\frac{t}{1+t}dt+\frac{1}{2}\int\frac{(t-t^2)(1+t^2)}{1+t^4}dt$</p> <p>$\displaystyle = \frac{1}{2}\int \frac{(1+t)-1}{1+t}dt+\frac{1}{2}\int \frac{t+t^3-(t^2-1)-t^4-1}{1+t^4}dt$</p> <p>$\displaystyle =\frac{t}{2}-\frac{1}{2}\ln|t+1|+\frac{1}{4}\int\frac{2t}{1+t^4}dt+\frac{1}{2}\int\frac{t^3}{1+t^4}dt-\frac{1}{2}\int \frac{t^2-1}{1+t^4}dt-\frac{1}{2}t+\mathcal{C}$</p> <p>all integrals are easy except $\displaystyle \mathcal{J} = \int\frac{t^2-1}{1+t^4}dt = \int\frac{1-t^{-2}}{\left(t+t^{-1}\right)^2-2}dt = \int\frac{(t+t^{-1})'}{(t+t^{-1})^2-2}dt$</p> Answer 2: <p>We rationalise the denominator to get $$I =\int \frac {\sin x-\sqrt {\cos x\sin x}}{\sin x-\cos x} dx $$ Writing everything in terms of $\cot x $, we get $$I =\int \mathrm{csc}^2 x\left(\frac {\sqrt {\cot x}-1}{\cot^3 x-\cot^2 x+\cot x-1}\right) dx $$ Now substituting $u=\cot x $ and further $v=\sqrt {u} $ gives us $$I = -\int\frac {2v }{v^5+ v^4 + v+1} dv $$ Performing a partial fraction decomposition we have $$I = \frac {2}{4-4\sqrt {2}}\int \frac {v +\sqrt {2}-1}{v^2 +\sqrt {2}v +1} dv +\frac {2}{4+4\sqrt {2}}\int \frac {v-\sqrt {2}-1}{v^2-\sqrt {2 }v+1} dv+\int \frac {1}{1+v} dv =I_1 +I_2 +I_3$$ Hope you can take it from here. </p> Answer 3: <p><em>HINT</em> multiply numerator and denominator by $\frac{1}{\sqrt{\sin(x)}}$, then $t = \sqrt{\cot(x)}$; after which you'll have $\displaystyle\frac{2t}{(t^4 + 1)(t+1)}$</p>
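(A sketch of how the remaining pieces can be finished; the substitution $w$ and the worked antiderivatives below are added for illustration and are not part of the original answers.) For the last integral in Answer 1, set $w = t + t^{-1}$, so that $dw = (1-t^{-2})\,dt$ and $$\mathcal{J} = \int\frac{dw}{w^2-2} = \frac{1}{2\sqrt{2}}\ln\left|\frac{w-\sqrt{2}}{w+\sqrt{2}}\right| + C = \frac{1}{2\sqrt{2}}\ln\left|\frac{t^2-\sqrt{2}\,t+1}{t^2+\sqrt{2}\,t+1}\right| + C.$$ For Answer 2, the three pieces are standard forms: $I_3$ integrates to $\ln|1+v|$, while $I_1$ and $I_2$ each split, after completing the square, into a logarithm plus an arctangent, e.g. $$\int \frac{v+\sqrt{2}-1}{v^2+\sqrt{2}\,v+1}\,dv = \frac{1}{2}\ln\left(v^2+\sqrt{2}\,v+1\right) + \left(1-\sqrt{2}\right)\arctan\left(\sqrt{2}\,v+1\right) + C.$$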
How does changing metrics help to find solutions to a partial differential equation?
2,089,153
11
https://math.stackexchange.com/questions/2089153/how-does-changing-metrics-help-to-find-solutions-to-a-partial-differential-equat
<p>I am taking a course in functional analysis, and while reviewing the definition of a metric and various examples, my professor mentioned that one of the reasons we care about swapping (possibly equivalent?) metrics is in PDE: a solution to a PDE may be very difficult to find in one metric space, but easier in another.</p> <p>I was hoping the Math Stack Exchange community might be able to expand on this and maybe provide some examples that I can understand.</p> <p>Thanks!</p>
[ { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. 
{ "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Equations of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" } ]
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" }, { "answer_id": 2089212, "score": 9, "text": "<p>Here's one perspective, involving the notion of distance on function spaces. </p>\n\n<p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine \"closeness\" of approximations. </p>\n\n<p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p>\n\n<hr>\n\n<p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \\Delta = -\\sum\\partial_i^2$ defined on compactly supported smooth functions $D = C^\\infty_0(\\Omega)$ in a two-dimensional bounded domain $\\Omega\\subset\\Bbb{R}^2$, and $H = L^2(\\Omega)$.</p>\n\n<p>The equation $\\Delta u = f$ is really a question about linear algebra in $H$: \"Given $f\\in H$, can we find $u\\in D$ such that $Tu = f$?\" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p>\n\n<p>We can define a new norm on $D$ by setting $\\|u\\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. 
The space $V$ is in this case the Sobolev space $H^1_0(\\Omega)$.)</p>\n\n<p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\\to V^*$ defined by taking $f$ to $(f,\\cdot)$, where $\\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\\to V$ which gives solutions to the weak equation.</p>\n\n<p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as \"elliptic regularity.\")</p>\n\n<hr>\n\n<p>[1] The proof of Riesz (by which I mean $H\\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>\n" }, { "answer_id": 2089223, "score": 12, "text": "<p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p>\n\n<p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p>\n\n<p>Let $\\Omega \\subseteq \\mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that\n$$\n\\begin{cases}\n-\\Delta u =f &amp; \\text{in }\\Omega \\\\\nu =0 &amp; \\text{on }\\partial \\Omega\n\\end{cases}\n$$\nfor a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \\in C^0(\\bar{\\Omega})$ such that the above problem does not admit a solution $u \\in C^2(\\Omega)$, where here the notion of solution is pointwise, i.e. $-\\Delta u(x) = f(x)$ for all $x \\in \\Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\\Omega)$ for data in $C^0(\\Omega)$. </p>\n\n<p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \\in H^1_0(\\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy \n$$\n\\int_\\Omega \\nabla u \\cdot \\nabla v = \\int_\\Omega f v \\text{ for all }v \\in H^1_0(\\Omega).\n$$\nWithin this context it's possible to prove that there exists a unique $u \\in H^1_0(\\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that \n$$\n\\Vert u \\Vert_{H^1_0} \\le C_1 \\Vert f \\Vert_{L^2} \\le C_2 \\Vert f \\Vert_{C^0}\n$$\nfor some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to \"measure\" the dependence of solutions on the data.</p>\n\n<p>This is, of course, just one small example. There are many, many more...</p>\n" } ]
Question: How does changing metrics help to find solutions to a partial differential equation?

Answer 1: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p>
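(A standard sketch, not spelled out in the answer above, of where this weak formulation comes from: assume $u$ is a smooth solution of $-\Delta u = f$ with $u = 0$ on $\partial\Omega$, multiply by a test function $v \in C^\infty_0(\Omega)$, and integrate by parts; the boundary term vanishes because $v$ does, leaving $$ \int_\Omega f v = -\int_\Omega (\Delta u) v = \int_\Omega \nabla u \cdot \nabla v. $$ The weak formulation adopts this identity, which makes sense for every $u \in H^1_0(\Omega)$, as the definition of a solution. The same identity gives uniqueness: if $u_1, u_2$ are two weak solutions, then $w = u_1 - u_2$ satisfies $\int_\Omega \nabla w \cdot \nabla v = 0$ for all $v \in H^1_0(\Omega)$; taking $v = w$ yields $\int_\Omega |\nabla w|^2 = 0$, and the Poincaré inequality then forces $w = 0$ in $H^1_0(\Omega)$.)
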
Answer 2: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Equations of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>
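(One way to fill in the step this answer glosses over, sketched under its assumptions: for $f \in H$, the functional $F(v) = (f,v)_H$ obeys $|F(v)| \le \Vert f \Vert_H \Vert v \Vert_H \le \Vert f \Vert_H \Vert v \Vert_T$, so $F$ is bounded on $V$, and the Riesz representation theorem gives a unique $u \in V$ with $(u,v)_T = F(v)$ for all $v \in V$. Note that with $\Vert u \Vert_T^2 = (u,u) + (Tu,u)$ this $u$ is a weak solution of $(I + T)u = f$ rather than of $Tu = f$; for the Dirichlet Laplacian one usually runs the same argument with the Dirichlet inner product $(\nabla u, \nabla v)_{L^2}$, which the Poincaré inequality makes equivalent to the $T$-norm on $H^1_0(\Omega)$, and that version of the Riesz argument produces the weak solution of $\Delta u = f$ itself.)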
Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 4: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. 
However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 5: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 6: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. 
It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 7: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. 
There are many, many more...</p> Answer 8: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? 
This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 9: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 10: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). 
We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 11: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. 
Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 12: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. 
However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 13: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 14: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. 
It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 15: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. 
There are many, many more...</p> Answer 16: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? 
This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 17: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 18: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). 
Answer 17: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example from the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary, and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\bar{\Omega})$.</p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$, with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric, in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p>
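<p>To see where the weak formulation and the continuity estimate come from, here is a quick sketch using only integration by parts, Cauchy-Schwarz, and the Poincaré inequality. Multiplying $-\Delta u = f$ by a test function $v \in H^1_0(\Omega)$ and integrating by parts (the boundary term vanishes since $v = 0$ on $\partial\Omega$) yields $\int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v$, which is exactly the weak formulation. Taking $v = u$ then gives $$ \int_\Omega |\nabla u|^2 = \int_\Omega f u \le \Vert f \Vert_{L^2} \Vert u \Vert_{L^2} \le C_P \Vert f \Vert_{L^2} \Vert \nabla u \Vert_{L^2}, $$ where $C_P$ is the Poincaré constant of $\Omega$, so $\Vert \nabla u \Vert_{L^2} \le C_P \Vert f \Vert_{L^2}$. Since $\Vert \nabla \cdot \Vert_{L^2}$ is an equivalent norm on $H^1_0(\Omega)$, this is the first inequality above, and the second follows from $\Vert f \Vert_{L^2} \le |\Omega|^{1/2} \Vert f \Vert_{C^0}$ because $\Omega$ is bounded. By linearity, applying the same estimate to the difference of two solutions gives uniqueness.</p>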
<p>This is, of course, just one small example.
However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 21: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 22: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. 
It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 23: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. 
There are many, many more...</p> Answer 24: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? 
This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 25: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 26: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). 
We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 27: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. 
Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 28: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. 
However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 29: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. 
If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 30: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. 
It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 31: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. 
There are many, many more...</p> Answer 32: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? 
This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 33: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 34: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). 
We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Operators of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p> Answer 35: <p>This is really embedded into one of the central questions in PDE, namely well-posedness. The well-posedness of a PDE requires three things: existence of a solution, uniqueness of solutions, and continuous dependence of the solution on the data in the problem. In each of these statements there is actually a crucial missing ingredient: existence in what space and in what sense, uniqueness in what space or collection of functions, and continuity with respect to which metric or topology?</p> <p>It is in the qualification of the spaces / metrics in these three issues that we encounter your professor's point. 
Here's a very important example in the theory of elliptic equations.</p> <p>Let $\Omega \subseteq \mathbb{R}^n$ be bounded and open with a smooth boundary and consider the Poisson problem of finding $u$ such that $$ \begin{cases} -\Delta u =f &amp; \text{in }\Omega \\ u =0 &amp; \text{on }\partial \Omega \end{cases} $$ for a given function $f$ (the data in the problem). Remarkably, it's possible to find explicit examples of functions $f \in C^0(\bar{\Omega})$ such that the above problem does not admit a solution $u \in C^2(\Omega)$, where here the notion of solution is pointwise, i.e. $-\Delta u(x) = f(x)$ for all $x \in \Omega$. Framed in terms of the above discussion, the problem is <em>not</em> well-posed in the space $C^2(\Omega)$ for data in $C^0(\Omega)$. </p> <p>However, if we switch to a different function space and a different notion of solution, then we can produce a solution for the same $f$ given above. If we work with weak solutions, then we require $u \in H^1_0(\Omega)$ (the $L^2$ Sobolev space of order $1$ with functions vanishing on the boundary), and a solution must satisfy $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \text{ for all }v \in H^1_0(\Omega). $$ Within this context it's possible to prove that there exists a unique $u \in H^1_0(\Omega)$ solving the PDE in the above weak sense, and that $u$ depends continuously on $f$ in the $H^1_0$ metric in the sense that $$ \Vert u \Vert_{H^1_0} \le C_1 \Vert f \Vert_{L^2} \le C_2 \Vert f \Vert_{C^0} $$ for some constants $C_1,C_2&gt;0$ that don't depend on $f$. This means that the problem <em>is</em> well-posed in this weak sense, and the key difference is changing our notion of solution and the metric that we use to define the function space for solutions and to "measure" the dependence of solutions on the data.</p> <p>This is, of course, just one small example. There are many, many more...</p> Answer 36: <p>Here's one perspective, involving the notion of distance on function spaces. </p> <p>Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. </p> <p>Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.)</p> <hr> <p>To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$.</p> <p>The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. 
However, $T$ is not bounded --- that is, <strong>the metric on $H$ is not suitable for finding a solution to the equation.</strong></p> <p>We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.)</p> <p>In this <strong>new norm</strong> where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation.</p> <p>(Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of <em>Partial Differential Equations</em> by Evans, or Ch 7-8 of <em>Elliptic Partial Differential Equations of Second Order</em> by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.")</p> <hr> <p>[1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.</p>
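<p>To make the weak formulation above concrete, here is a minimal numerical sketch (an illustration of mine, not code from either answer): it discretizes the one-dimensional model problem $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ by a piecewise-linear Galerkin method, i.e. the $H^1_0$ weak formulation restricted to a finite-dimensional subspace. The mesh size, the choice of data $f$, and the mass-lumped load vector are assumptions made for the demo.</p>
<pre><code># Minimal P1 Galerkin sketch for -u'' = f on (0,1), u(0) = u(1) = 0.
# All parameter choices (mesh size, data f) are illustrative assumptions.
import numpy as np

n = 99                        # number of interior nodes (assumed)
h = 1.0 / (n + 1)             # uniform mesh width
x = np.linspace(h, 1 - h, n)  # interior node coordinates

def f(t):
    # Data chosen so the exact solution is sin(pi t).
    return np.pi**2 * np.sin(np.pi * t)

# Stiffness matrix A[i, j] = integral of phi_i' * phi_j' for P1 hat functions:
# tridiagonal, 2/h on the diagonal and -1/h on the off-diagonals.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Mass-lumped load vector b[i] ~ integral of f * phi_i.
b = h * f(x)

u = np.linalg.solve(A, b)     # coefficients of the discrete weak solution

print("max nodal error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
</code></pre>
<p>The stiffness matrix is the discrete counterpart of the bilinear form $\int \nabla u \cdot \nabla v$, and solving the linear system plays the role of the bounded solution operator that Riesz/Lax-Milgram provides in the continuous setting.</p>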
Is there a term for a magma with identity (only)?
2,089,449
11
https://math.stackexchange.com/questions/2089449/is-there-a-term-for-a-magma-with-identity-only
<p>If we start from magmas and consider three properties - associativity, identity, and invertibility (divisibility) - we theoretically get $2^3=8$ structures, according to which of these properties a structure possesses, as shown in the picture: <a href="https://en.wikipedia.org/wiki/Magma_(algebra)#/media/File:Magma_to_group2.svg" rel="noreferrer">https://en.wikipedia.org/wiki/Magma_(algebra)#/media/File:Magma_to_group2.svg</a></p> <p>Two structures are missing there: the associative, divisible magma and the magma with identity. Well, we know that an associative quasigroup is a group, so only one is left. Is there a term for it?</p> <p>And is it necessary to distinguish between invertibility and divisibility?</p>
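<p>For orientation, here is the standard naming of the eight combinations (my own summary of the linked diagram, not part of the original question):</p> $$\begin{array}{ccc|l} \text{associative} &amp; \text{identity} &amp; \text{divisible} &amp; \text{name} \\ \hline &amp; &amp; &amp; \text{magma} \\ \checkmark &amp; &amp; &amp; \text{semigroup} \\ &amp; \checkmark &amp; &amp; \text{unital magma} \\ &amp; &amp; \checkmark &amp; \text{quasigroup} \\ \checkmark &amp; \checkmark &amp; &amp; \text{monoid} \\ &amp; \checkmark &amp; \checkmark &amp; \text{loop} \\ \checkmark &amp; &amp; \checkmark &amp; \text{associative quasigroup} \\ \checkmark &amp; \checkmark &amp; \checkmark &amp; \text{group} \end{array}$$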
[ { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. 
We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. 
Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. 
And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. 
However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 
12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. 
We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. 
Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. 
And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. 
However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 
12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. 
We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" }, { "answer_id": 2089767, "score": 10, "text": "<p>The term \"unital magma\" does get used, including by Bourbaki (Algebra I, p. 12):</p>\n\n<blockquote>\n <p><em>A magma with an identity element is called a unital magma.</em>\n Here, an identity element is defined to be two-sided.</p>\n</blockquote>\n\n<p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. 
Note: a left inverse and a right inverse might not be the same, but invertibility does not demand this.</p>\n" }, { "answer_id": 2089530, "score": 2, "text": "<p>Here's an example of such a structure.</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\cdot &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; a\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>We see that $e$ is an identity element. However, $(S, \\cdot)$ is not associative, since $(a\\cdot b)\\cdot b = a\\cdot b = a$, while $a\\cdot (b\\cdot b) = a \\cdot a = b$.</p>\n\n<p>Also, $(S, \\cdot)$ does not have inverses. There is no element $x\\in S$ such that $a\\cdot x = e$.</p>\n\n<p>Also, though you didn't mention commutativity, $(S, \\cdot)$ is not commutative: $a\\cdot b = b$, while $b\\cdot a = a$. We actually could make it commutative, though:</p>\n\n<p>$$\\begin{array}{c|ccc}\n\\star &amp; e &amp; a &amp; b \\\\\n\\hline\ne &amp; e &amp; a &amp; b\\\\\na &amp; a &amp; b &amp; b\\\\\nb &amp; b &amp; b &amp; a\\\\\n\\end{array}$$</p>\n\n<p>$(S, \\star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \\star)$ is still non-associative, since $(a\\star b)\\star b = b\\star b = a$, while $a \\star (b\\star b) = a \\star a = b$.</p>\n\n<p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why the some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>\n" } ]
Question: Is there a term for a magma with identity (only)? Answer 1: <p>The term "unital magma" does get used, including by Bourbaki (Algebra I, p. 12):</p> <blockquote> <p><em>A magma with an identity element is called a unital magma.</em> Here, an identity element is defined to be two-sided.</p> </blockquote> <p>Divisibility does not imply invertibility, so they are distinct. One can have divisibility without an identity element; one needs both for invertibility. Note: a left inverse and a right inverse might not be the same, but invertibility does not demand that they coincide.</p> Answer 2: <p>Here's an example of such a structure.</p> <p>$$\begin{array}{c|ccc} \cdot &amp; e &amp; a &amp; b \\ \hline e &amp; e &amp; a &amp; b\\ a &amp; a &amp; b &amp; a\\ b &amp; b &amp; b &amp; a\\ \end{array}$$</p> <p>We see that $e$ is an identity element. However, $(S, \cdot)$ is not associative, since $(a\cdot b)\cdot b = a\cdot b = a$, while $a\cdot (b\cdot b) = a \cdot a = b$.</p> <p>Also, $(S, \cdot)$ does not have inverses. There is no element $x\in S$ such that $a\cdot x = e$.</p> <p>Also, though you didn't mention commutativity, $(S, \cdot)$ is not commutative: $a\cdot b = a$, while $b\cdot a = b$. We actually could make it commutative, though:</p> <p>$$\begin{array}{c|ccc} \star &amp; e &amp; a &amp; b \\ \hline e &amp; e &amp; a &amp; b\\ a &amp; a &amp; b &amp; b\\ b &amp; b &amp; b &amp; a\\ \end{array}$$</p> <p>$(S, \star)$ still has an identity, still lacks inverses, and is now commutative. And $(S, \star)$ is still non-associative, since $(a\star b)\star b = b\star b = a$, while $a \star (b\star b) = a \star a = b$.</p> <p>I guess one way to make a magma with identity is to start with whatever magma you want, and then add another element, $e$, and extend your binary operation to make $e$ act as an identity. So I think this might explain why some people suggested in the comments that this type of structure might not be very interesting, and why it doesn't have its own name.</p>
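Because the claims about the two Cayley tables above are all finite checks, they can be verified by brute force. The following sketch is not part of the original answers; the dictionary encoding and the helper names (`is_identity`, `is_associative`, and so on) are my own illustration.

```python
# Brute-force checks of the two 3-element Cayley tables from Answer 2.
# Encoding assumption: op[(x, y)] means "x op y", with x the row label.
S = ["e", "a", "b"]

dot = {  # first table
    ("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b",
    ("a", "e"): "a", ("a", "a"): "b", ("a", "b"): "a",
    ("b", "e"): "b", ("b", "a"): "b", ("b", "b"): "a",
}
star = {  # second table, adjusted to be commutative
    ("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b",
    ("a", "e"): "a", ("a", "a"): "b", ("a", "b"): "b",
    ("b", "e"): "b", ("b", "a"): "b", ("b", "b"): "a",
}

def is_identity(op, e):
    return all(op[(e, x)] == x and op[(x, e)] == x for x in S)

def is_associative(op):
    return all(op[(op[(x, y)], z)] == op[(x, op[(y, z)])]
               for x in S for y in S for z in S)

def is_commutative(op):
    return all(op[(x, y)] == op[(y, x)] for x in S for y in S)

def has_inverse(op, e, x):
    # x is invertible iff some y is a two-sided inverse of x
    return any(op[(x, y)] == e and op[(y, x)] == e for y in S)

for name, op in [("dot", dot), ("star", star)]:
    print(name,
          "identity:", is_identity(op, "e"),
          "associative:", is_associative(op),
          "commutative:", is_commutative(op),
          "a invertible:", has_inverse(op, "e", "a"))
```

Running this prints that both tables have an identity, neither is associative, neither gives `a` an inverse, and only the second is commutative, matching the answer's claims.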
If there is one perfect square in an arithmetic progression, then there are infinitely many
2,090,325
11
https://math.stackexchange.com/questions/2090325/if-there-is-one-perfect-square-in-an-arithmetic-progression-then-there-are-infi
<p>Consider the following positive integers: $$a,a+d,a+2d,\dots$$ Suppose there is a perfect square in the above list of numbers. Then prove that there are infinitely many perfect squares in the above list. How can I do this?</p> <p>At first I started in this way: let the $n$th term be a perfect square. Therefore, $$t_n=a+(n-1)d=m^2.$$ Then I thought I would substitute values for $n$, but I failed to get anywhere from there. Can somebody help me?</p>
[ { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. 
</p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. 
</p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. 
</p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. 
</p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" }, { "answer_id": 2090330, "score": 31, "text": "<p>Note that $$(m+d)^2=m^2+(2m+d)\\cdot d$$</p>\n" }, { "answer_id": 2090331, "score": 5, "text": "<p>Let $n^2$ be the known square.</p>\n\n<p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p>\n" }, { "answer_id": 2090339, "score": 2, "text": "<p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p>\n\n<p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p>\n\n<p>The RHS is a perfect square, and the left side\nis the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p>\n" } ]
Question: If there is one perfect square in an arithmetic progression, then there are infinitely many Answer 1: <p>Note that $$(m+d)^2=m^2+(2m+d)\cdot d$$</p> Answer 2: <p>Let $n^2$ be the known square.</p> <p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p> Answer 3: <p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p> <p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p> <p>The RHS is a perfect square, and the left side is the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p> Answer 4: <p>Note that $$(m+d)^2=m^2+(2m+d)\cdot d$$</p> Answer 5: <p>Let $n^2$ be the known square.</p> <p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p> Answer 6: <p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p> <p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p> <p>The RHS is a perfect square, and the left side is the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p> Answer 7: <p>Note that $$(m+d)^2=m^2+(2m+d)\cdot d$$</p> Answer 8: <p>Let $n^2$ be the known square.</p> <p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p> Answer 9: <p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p> <p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p> <p>The RHS is a perfect square, and the left side is the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p> Answer 10: <p>Note that $$(m+d)^2=m^2+(2m+d)\cdot d$$</p> Answer 11: <p>Let $n^2$ be the known square.</p> <p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. You can take $m-n$ to be any multiple of $d$, and $k$ follows.</p> Answer 12: <p>Starting from where you left, suppose the $k$th term is a perfect square such that $$a_k = a_1 + (k-1)d = p^2$$</p> <p>Now add $2pmd + m^2d^2 $ to both sides where $m$ is a natural number giving us, $$\Rightarrow a_1 + (k-1)d + 2pmd + m^2d^2 = p^2 + 2pmd + m^2d^2 $$ $$\Rightarrow a_1 + [(k-1)+2pm+m^2]d = (p+md)^2$$</p> <p>The RHS is a perfect square, and the left side is the $(k-1+2pm+m^2)$th term for infinitely many values of $m $. Hope it helps. </p> Answer 13: <p>Note that $$(m+d)^2=m^2+(2m+d)\cdot d$$</p> Answer 14: <p>Let $n^2$ be the known square.</p> <p>Then $n^2+kd=m^2$ is equivalent to $kd=(m-n)(m+n)$. 
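As a hedged aside (not part of the original thread): the construction in Answer 3 is easy to sanity-check numerically. Below is a minimal Python sketch, assuming a small sample progression; the function name and sample values are illustrative only.

```python
import math

# Minimal sketch (illustrative only, not from the thread): given a square term
# a_k = a1 + (k-1)*d = p^2, the identity (p + m*d)^2 = p^2 + (2*p*m + m*m*d)*d
# produces the square term with index n = k + 2*p*m + m*m*d for every natural m.
def more_squares(a1, d, k, count=3):
    p2 = a1 + (k - 1) * d
    p = math.isqrt(p2)
    assert p * p == p2, "term k must be a perfect square"
    return [(k + 2 * p * m + m * m * d, (p + m * d) ** 2)
            for m in range(1, count + 1)]

# Example: the progression 1, 4, 7, 10, ... contains the square 4 at k = 2.
for n, sq in more_squares(1, 3, 2):
    assert 1 + (n - 1) * 3 == sq  # the n-th term really is the promised square
    print(n, sq)                  # -> (9, 25), (22, 64), (41, 121)
```

Any progression with one square term works the same way; only the index formula depends on $d$.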
Place the numbers by their size.
2,093,052
11
https://math.stackexchange.com/questions/2093052/place-the-numbers-by-their-size
<p>Place the following numbers in order of size:</p> <p>$$A=2^{4^{2^{.^{.^{.^{2^{4}}}}}}},\quad B=4^{2^{4^{.^{.^{.^{4^{2}}}}}}},\quad C=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>In the tower $C$ there are $2000$ "$2$"s, and in the towers $A$ and $B$ there are $500$ "$2$"s and $500$ "$4$"s each. It seems to me that $C&gt;B&gt;A$, but I can't give a proof. Any hints?</p> <p><a href="http://artofproblemsolving.com/community/user/29428" rel="nofollow noreferrer">Here</a> is the same problem at Art of Problem Solving; I hope it helps.</p>
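A hedged aside before the posted answers (not part of the original question): since every tower here is a power of $2$, the smallest mixed towers can be compared exactly by their base-2 logarithms, for instance in Python.

```python
# Hedged sketch: both 4-element towers are powers of 2, so comparing their
# base-2 logarithms (exact integers here) settles the order without ever
# expanding the towers themselves.
lg_h = 4 ** (2 ** 4)       # lg(2^{4^{2^4}}) = 4^16 = 2^32
lg_j = 2 * 2 ** (4 ** 2)   # lg(4^{2^{4^2}}) = 2 * 2^16 = 2^17
print(lg_h > lg_j)         # True: the tower starting with 2 is the larger one
```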
[ { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p>\n\n<p>In a similar way we have that $4^x &gt; 3\cdot2^y$.</p>\n\n<p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates, starting at the tails of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{32}$ and $2^{4^2}=2^{16}$. At the next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, and thereby we will have that result all the way until we've reached $A$ and $B$, so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in a similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }
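To close this set of answers, a hedged numerical check (not from any of the answers above): the base case of the iterated-logarithm induction can be reproduced with exact integers.

```python
# Hedged sketch: base case of the lg(lg(.)) induction, using exact integers.
A1 = 2 ** 4            # two-element tower 2^4
B1 = 4 ** 2            # two-element tower 4^2
C1 = 2 ** 2 ** 2 ** 2  # four-element tower of 2s (** is right-associative)
assert A1 == B1 == 16 and C1 == 65536

lglg_A2 = 2 * A1       # lg lg A_2 = 2*A_1 = 32
lglg_B2 = 1 + B1       # lg lg B_2 = 1+B_1 = 17
# lg lg C_2 = 2^(2^C_1) = 2^(2^65536): astronomically larger; no need to expand.
print(lglg_A2 > lglg_B2)   # True, hence A_2 > B_2, and the induction proceeds
```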
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise for practicing how to deal with exponentiation. Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$ (note $c_i\ge c_1=2^{16}$, so certainly $2c_i &lt; 2^{c_i}$), so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions for $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>
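<p>The chain of \"top\" exponents can be checked directly at $i=1$ with exact integers (a small sketch based only on the definitions above; for larger $i$ the numbers are no longer representable, which is exactly why the symbolic argument is needed):</p>
<pre><code># Sketch: verify (b_1+1) <= (a_1+1) < 2*a_1 <= 2*c_1 < 2^c_1 < 2^(2^c_1).
a1 = b1 = 16              # a_1 = 2^4, b_1 = 4^2
c1 = 2**16                # c_1 = 2^2^2^2 = 65536

assert b1 + 1 <= a1 + 1 < 2 * a1 <= 2 * c1 < 2**c1   # chained comparison
# The final link 2^c_1 < 2^(2^c_1) follows by comparing exponents:
assert c1 < 2**c1
print("top-exponent chain holds at i = 1, hence b_2 < a_2 < c_2")
</code></pre>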
\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>An alternative and relatively easy solution, without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$s and two $4$s), and $k $ contains sixteen $2$s. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and evaluating its two topmost exponentiations, we get an exponential tower of fourteen elements, the first thirteen of which are $2$ and the last one, at the top, is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$. (Here $(X)^{y}$ is shorthand for the tower $X$ with the number $y$ placed at its very top, not an ordinary power.)</p>\n\n<p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p>\n\n<p>So, repeating this procedure $m=124 \,\,$ times, we arrive at a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p>
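<p>The two concrete comparisons that anchor this argument, $h&gt;j$ and $k&gt;h$, can be confirmed with exact integer arithmetic (a minimal sketch; the towers are compared through their exponents, since $h$, $j$ and $k$ themselves are far too large to build):</p>
<pre><code># h = 2^(4^(2^4)) and j = 4^(2^(4^2)): compare their exponents of 2.
lg_h = 4**(2**4)         # h = 2^(4^16), and 4^16 = 2^32
lg_j = 2 * 2**(4**2)     # j = 2^(2*2^16) = 2^(2^17)
assert lg_h > lg_j       # hence h > j

# k = 2^t with t a tower of fifteen 2s; already the top five levels of k,
# 2^2^2^2^2 = 2^65536, exceed lg(h) = 2^32, and t is at least that large.
tower5 = 2**2**2**2**2   # right-associative: 2^(2^(2^(2^2))) = 2^65536
assert tower5 > lg_h     # so t > lg(h), hence k = 2^t > h
print("k > h > j confirmed")
</code></pre>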
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though only in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But for the practice below, its meaning is much simpler, as will become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is the $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" comes down to this: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
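First, a two-line check of the \"exponent chopper\" in code (a minimal Python sketch; <code>log2</code> plays the role of $\\operatorname{lg}$):</p>\n\n<pre><code>from math import log2\n\n# lg(2^x) = x : one application chops one exponent off a power of two\nprint(log2(2 ** 10), log2(2 ** 32))  # 10.0 32.0\n</code></pre>\n\n<p>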
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well, because $\\,C_n &gt; A_n &gt; B_n &gt; 1\\,$ gives $\\,2^{2^{C_n}} &gt; 2A_n &gt; 1+B_n\\,$. In particular:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ and $y\\ge2$, then $2^x &gt; 3\\cdot4^y$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p>\n\n<p>In a similar way, under the same assumptions ($x &gt; 3y$ and $y\ge2$), we also have $4^x &gt; 3\cdot2^y$.</p>\n\n<p>So if $y\ge2$ the ratio is at least $2^2=4$, and in particular larger than $3$. So we can use this to get relative estimates, starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{32}$ and $2^{4^2}=2^{16}$. At the next step we then get $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$ (apply the estimate with $x = 4^{2^4} = 2^{32}$ and $y = 2^{4^2} = 2^{16}$, so that $x &gt; 3y$), and this propagates all the way up until we have reached $A$ and $B$, giving $A&gt;3B&gt;B$ (assuming that both towers contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in a similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long, which is done by just replacing the tail of $C$. Obviously $2^{2^{.^{.^{.^{2}}}}} &gt; 3\cdot4$ (a tower of $997$ twos on the left), and the rest follows, so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p>\n\n<p>So, repeating this procedure $m=124 \,\,$ times, we arrive at a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. 
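</p>\n\n<p>As a quick machine check of the base case (a side note, not needed for the argument): every entry of these towers is $2$ or $4=2^2$, so each tower equals $2^E$ for an exact integer $E$, and $E$ can be computed recursively as long as it stays small enough to build. A short Python sketch - the bottom-up list notation is ours, so <code>[2, 4, 2, 4]</code> stands for $2^{4^{2^4}}$:</p>\n\n<pre><code>def log2_exponent(tower):\n    # exact E with value(tower) == 2**E; usable only while E stays buildable\n    e = 1 if tower[0] == 2 else 2      # 2 = 2^1, 4 = 2^2\n    if len(tower) == 1:\n        return e\n    # base^(rest) = 2^(e * 2^E(rest))\n    return e * 2 ** log2_exponent(tower[1:])\n\nprint(log2_exponent([2, 4, 2, 4]) == 2 ** 32)   # h = 2^(2^32): True\nprint(log2_exponent([4, 2, 4, 2]) == 2 ** 17)   # j = 2^(2^17): True\nprint(log2_exponent([2] * 6) &gt; 2 ** 32)         # six 2s already beat h: True\n</code></pre>\n\n<p>This confirms $h&gt;j$, and since adding levels to a tower of $2$s only increases its value, the last line already forces $k&gt;h$. 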
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" }, { "answer_id": 2100448, "score": 5, "text": "<p>Alternative and relatively easy solution without using logarithms. Let us set </p>\n\n<p>$$h=2^{4^{2^4}}$$</p>\n\n<p>$$j=4^{2^{4^2}}$$</p>\n\n<p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p>\n\n<p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p>\n\n<p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p>\n\n<p>$$j=4^{2^{16}} = 2^{2 \\cdot 2^{16}} = 2^{ 2^{17}} $$</p>\n\n<p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p>\n\n<p>Now let us consider the numbers $A , B , C $ reported in the OP. 
Again starting to solve the exponentials from the top, we can rewrite them as</p>\n\n<p>$$A=(A_1)^h $$\n$$B=(B_1)^j $$\n$$C=(C_1)^k $$</p>\n\n<p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \\,\\,\\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \\,\\,\\, $ elements instead of $2000$.</p>\n\n<p>Because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, then this would necessarily imply $C&gt;A&gt;B \\,\\,\\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \\,\\,\\, $. So we can set </p>\n\n<p>$$A_1=(A_2)^h $$\n$$B_1=(B_2)^j $$\n$$C_1=(C_2)^k $$</p>\n\n<p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\\cdot 4 =492 \\,\\,\\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \\cdot 16=1968 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \\,\\,\\, $, and then $C&gt;A&gt;B \\,\\,\\, $.</p>\n\n<p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\\cdot 4 \\,\\,\\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \\cdot 16 \\,\\,\\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \\,\\,\\, $, showing that $C_m&gt;A_m&gt;B_m \\,\\,\\, \\,\\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \\,\\, \\,\\,\\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \\,\\,\\, \\,\\, $ and so on, until $C&gt;A&gt;B \\,\\,\\, \\,\\, $.</p>\n\n<p>So, repeating this procedure $m=124 \\,\\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\\cdot 4 =4 \\,\\,\\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \\cdot 16=16 \\,\\,\\, $ elements. Since $ A_{124}=h \\,\\, \\,\\,\\, $, $B_{124}=j \\,\\, \\,\\,\\, $, and $C_{124}=k \\,\\, \\,\\,\\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \\,\\,\\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \\,\\,\\, $. 
</p>\n" }, { "answer_id": 2097671, "score": 3, "text": "<p>The problem can be re-formulated in algorithmic style as follows:\n$$\n\\begin{matrix}\na_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\\\\n &amp; &amp; c_1 = 2^{c_0} \\\\\na_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\\\\n &amp; &amp; c_3 = 2^{c_2} \\\\\na_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\na_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\\\\n &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\\\\na_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\n\\cdot &amp; \\cdot &amp; \\cdot \\\\\nA = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}}\n\\end{matrix}\n$$\nNow we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p>\n\n<p>The logarithm with base $2$ may be defined formally as $\\,\\operatorname{lg}(x) = \\ln(x)/\\ln(2)$ .\n<BR>But in the practice below its meaning is much simpler, as might become clear from a few examples:\n$\\lg(2) = 1 \\; ; \\; \\lg(4) = \\lg\\left(2^2\\right) = 2 \\; ; \\; \\lg(8) = \\lg\\left(2^3\\right) = 3$ . In general : $\\,\\operatorname{lg}\\left(2^x\\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two.\nWe could have called this function \"chopper\", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms:\nleave it. The whole employment of our \"exponent chopper\" is like in here: $2^x &gt; 2^y \\; \\Longleftrightarrow \\; x &gt; y$ . So what's the problem?</p>\n\n<p>We are going to employ <A HREF=\"http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698\"><I>mathematical induction</I></A>. 
To that end, define subsequent \"approximations\"\nof $A,B,C$ as follows:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{4n}\n$$\nAs the first induction step, we calculate:\n$$\nA_1 = a_2 = 2^4 = 16 \\quad ; \\quad B_1= b_2 = 4^2 = 16 \\quad ; \\quad C_1 = c_4 = 2^{2^{2^2}} = 65536\n$$\nThat's not enough to establish an inequality between $A$ and $B$, so we take a second step for these:\n$$\nA_2 = a_4 = 2^{4^{2^4}} \\quad ; \\quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\\cdot{2^{4^2}}}\n$$\nLogarithms base $2$ :\n$$\n\\operatorname{lg}\\left(2^{4^{2^4}}\\right) = 4^{2^4} = 2^{2\\cdot{2^4}} \\quad ; \\quad\n\\operatorname{lg}\\left(4^{2^{4^2}}\\right) = 2\\cdot 2^{4^2} = 2^{1+4^2}\n$$\nAnd again:\n$$\n\\operatorname{lg}\\left(4^{2^4}\\right) = 2\\cdot 2^4 = 32\\quad ; \\quad\n\\operatorname{lg}\\left(2\\cdot 2^{4^2}\\right) = 1 + 4^2 = 17\n$$\nFrom $\\,32 &gt; 17\\,$ it follows that $\\;A_2 &gt; B_2\\,$ , namely: $\\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p>\n\n<p>Now assume that $\\;C_n &gt; A_n &gt; B_n \\gg 1\\;$ and prove that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ .\n$$\n\\begin{matrix}\nA_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\\\\nA_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}}\n\\end{matrix}\n$$\nTaking logarithms two times:\n$$\n\\begin{matrix}\n\\operatorname{lg}\\left(A_{n+1}\\right) = 4^{A_n} &amp;\n\\operatorname{lg}\\left(B_{n+1}\\right) = 2\\cdot 2^{B_n} &amp;\n\\operatorname{lg}\\left(C_{n+1}\\right) = 2^{2^{2^{C_n}}} \\\\\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(A_{n+1}\\right)\\right) = 2A_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(B_{n+1}\\right)\\right) = 1+B_n &amp;\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(C_{n+1}\\right)\\right) = 2^{2^{C_n}} \\end{matrix}\n$$\nFrom which it follows that $\\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\\;$ as well. Especially:\n$$\nC_2 &gt; A_2 &gt; B_2\n$$\nSo for all $\\,n \\ge 2\\,$ we have $\\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\\,n = 500\\,$ and you're done.<BR>Conclusion : $\\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p>\n\n<p><a href=\"https://i.stack.imgur.com/aygQY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aygQY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$\n(instead of $2000$). Then we have:\n$$\nA_n = a_{2n} \\quad ; \\quad B_n = b_{2n} \\quad ; \\quad C_n = c_{3n}\n$$\nAnd the first induction step results in an equality $\\;A_1=B_1=C_1$ :\n$$\nA_1 = 2^4 = 16 \\quad ; \\quad B_1 = 4^2 = 16 \\quad ; \\quad C_1 = 2^{2^2} = 16\n$$\nA second step is needed to establish an inequality:\n$$\n\\operatorname{lg}(\\operatorname{lg}(A_2)) = 32 \\quad ; \\quad\n\\operatorname{lg}(\\operatorname{lg}(B_2)) = 17 \\\\\n\\operatorname{lg}(\\operatorname{lg}(C_2)) =\n\\operatorname{lg}\\left(\\operatorname{lg}\\left(2^{2^{2^{16}}}\\right)\\right) =\n\\operatorname{lg}\\left(2^{2^{16}}\\right) = 2^{16} = 65536\n$$\nFinally resulting in the same as before: $\\;C &gt; A &gt; B$ .</p>\n" }, { "answer_id": 2094822, "score": 2, "text": "<p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\\cdot4^y$ if $y\\ge2$. 
This is because:</p>\n\n<p>$$2^x &gt; 2^{3y}$$\n$$4^y = 2^{2y}$$</p>\n\n<p>so their ratio is:</p>\n\n<p>$${2^x\\over4^y} &gt; {2^{3y}\\over2^{2y}}=2^y$$</p>\n\n<p>In similar way we have that $4^x &gt; 3\\cdot2^y$.</p>\n\n<p>So if $y\\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p>\n\n<p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\\cdots{997}\\cdots^2}} &gt; 3\\cdot4$, the rest follows - so $C&gt;A$.</p>\n" }, { "answer_id": 2103730, "score": 1, "text": "<p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\\cdot z}$, let:</p>\n\n<p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\\mathbf{\\left(2\\cdot a_{i}\\right)}}}$</p>\n\n<p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\\cdot 2^{b_i}}=2^{2^{\\mathbf{\\left(b_i+1\\right)}}}$</p>\n\n<p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\\mathbf{\\left(2^{2^{c_{i}}}\\right)}}}$</p>\n\n<p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p>\n\n<p>Now, if $1&lt;b_i\\leq a_i\\leq c_i$ then $\\mathbf{(b_i+1)}\\leq (a_i+1) &lt; \\mathbf{(2\\cdot a_i)} \\leq (2\\cdot c_i) &lt; 2^{c_i} &lt; \\mathbf{\\left( 2^{2^{c_i}}\\right)}$, so if we look at what is in parentheses \"on top\" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p>\n\n<p>$1&lt;b_1=a_1&lt;c_1 \\implies 1&lt;b_2&lt;a_2&lt;c_2 \\implies\\dots \\implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p>\n\n<blockquote>\n <p>$B&lt;A&lt;C$.</p>\n</blockquote>\n" } ]
Question: Place the numbers by their size. Answer 1: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. 
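</p> <p>A small addendum, spelling out the one step above that was left to inspection (why the fourteen-element tower obtained from $k$ dominates $h$), in the same logarithm-free spirit: since $2^x &gt; 2^y$ exactly when $x &gt; y$, we may peel one $2$ off the bottom of each side twice, reducing $k &gt; h = 2^{2^{32}}$ to a comparison of a twelve-element tower (eleven $2$s capped by $16$) with $32$; already the top three elements of that tower give $2^{2^{16}} &gt; 32$. Hence $k&gt;h&gt;j$ indeed, exactly as used throughout the peeling argument. 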
</p> Answer 2: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 3: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In a similar way, under the same assumptions ($x &gt; 3y$ and $y\ge2$), we also have $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ the ratio is at least $2^2=4$, and in particular larger than $3$. So we can use this to get relative estimates, starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{32}$ and $2^{4^2}=2^{16}$. At the next step we then get $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$ (apply the estimate with $x = 4^{2^4} = 2^{32}$ and $y = 2^{4^2} = 2^{16}$, so that $x &gt; 3y$), and this propagates all the way up until we have reached $A$ and $B$, giving $A&gt;3B&gt;B$ (assuming that both towers contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in a similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long, which is done by just replacing the tail of $C$. Obviously $2^{2^{.^{.^{.^{2}}}}} &gt; 3\cdot4$ (a tower of $997$ twos on the left), and the rest follows, so $C&gt;A$.</p> Answer 4: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 5: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
Answer 4: <p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}$, $B=b_{500}$, $C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$ (the strict steps use $a_i&gt;1$ and $c_i\ge 3$, which hold here), so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote>
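<p>The first recurrence step and the inequality chain can be spot-checked with exact integers. This is a sketch, not a proof; the sample triples are kept small only so that $2^{2^c}$ remains computable:</p>
<pre><code># First step: a_1 = b_1 = 16, c_1 = 2^16; the boxed "top" exponents of
# a_2 and b_2 are 2*a_1 = 32 and b_1 + 1 = 17, so a_2 > b_2.
a1 = b1 = 16
c1 = 2**16
assert 1 < a1 == b1 < c1
assert 2 * a1 > b1 + 1

# The chain itself, on sample triples with 1 < b <= a <= c and c >= 3:
for b, a, c in [(2, 3, 5), (16, 16, 16), (17, 18, 20)]:
    assert b + 1 <= a + 1 < 2 * a <= 2 * c < 2**c < 2**(2**c)
print("recurrence step and chain verified on samples")
</code></pre>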
Answer 5: <p>An alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h$ and $j$ contain only four numbers (two $2$s and two $4$s), and $k$ contains sixteen $2$s. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j$. Also, considering $k$ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements, where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A, B, C$ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1$ are exponential towers similar to the initial towers $A$ and $B$, respectively, but with $500-4=496$ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C$ but with $2000-16=1984$ elements instead of $2000$. (Here $(A_1)^h$ is shorthand for the tower $A_1$ with $h$ placed on top as its final exponent, not an ordinary power.)</p> <p>Because $k&gt;h&gt;j$, if we could show that $C_1&gt;A_1&gt;B_1$, then this would necessarily imply $C&gt;A&gt;B$. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1$. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2$ are again exponential towers similar to $A$ and $B$, respectively, but with $500-2\cdot 4 =492$ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C$ but with $2000-2 \cdot 16=1968$ elements instead of $2000$. As above, because $k&gt;h&gt;j$, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1$, and then $C&gt;A&gt;B$.</p> <p>To generalize: after repeating this procedure $m$ times, we obtain two numbers $A_m$ and $B_m$ that are exponential towers similar to $A$ and $B$, respectively, but with $500-m\cdot 4$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C$ but with $2000-m \cdot 16$ elements instead of $2000$. As above, because $k&gt;h&gt;j$, showing that $C_m&gt;A_m&gt;B_m$ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1}$, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2}$, and so on, until $C&gt;A&gt;B$.</p> <p>So, repeating this procedure $m=124$ times, we arrive at a point where $A_{124}$ and $B_{124}$ are exponential towers with $500-124\cdot 4 =4$ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16$ elements. Since $A_{124}=h$, $B_{124}=j$, and $C_{124}=k$, we have $C_{124}&gt;A_{124}&gt;B_{124}$, which necessarily implies, as shown above, $C&gt;A&gt;B$. </p>
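<p>The tail values $h$, $j$, $k$ can be spot-checked at the level of exponents (a minimal Python sketch; the full towers themselves are far beyond exact computation):</p>
<pre><code># h = 2^(4^(2^4)) = 2^(2^32) and j = 4^(2^(4^2)) = 2^(2^17), hence h > j:
assert 4**(2**4) == 2**32        # the exponent of h
assert 2 * 2**(4**2) == 2**17    # the exponent of j, since 4^x = 2^(2x)

# k is a tower of sixteen 2s; its three topmost 2s evaluate to 2^(2^2) = 16,
# leaving a fourteen-element tower (thirteen 2s topped by 16), so k > h.
assert 2**(2**2) == 16
print("h > j and the collapse of k's top verified")
</code></pre>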
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 9: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 10: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 11: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 12: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 13: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 14: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 15: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 16: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 17: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 18: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 19: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 20: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 21: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 22: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 23: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 24: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 25: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 26: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 27: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 28: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Answer 28: <p>This is an excellent exercise to practice how to deal with exponentiation. Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$. So, comparing what sits in parentheses "on top" of the $2^2$ in the last expression for each of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$, and using that $t\mapsto 2^{2^t}$ is increasing, we get:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote>
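<p>To make the recursion concrete, the first iteration reads (just substituting $a_1=b_1=16$ and $c_1=2^{16}$ into the bold "tops" above):</p> <p>$$b_2 = 2^{2^{(16+1)}} = 2^{2^{17}}, \qquad a_2 = 2^{2^{(2\cdot 16)}} = 2^{2^{32}}, \qquad c_2 = 2^{2^{\left(2^{2^{2^{16}}}\right)}},$$</p> <p>and indeed $17 &lt; 32 &lt; 2^{2^{2^{16}}}$, which is exactly the ordering $b_2&lt;a_2&lt;c_2$.</p>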
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 30: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 31: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 32: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 33: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 34: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 35: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 36: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 37: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 38: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 39: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 40: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 41: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 42: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 43: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 44: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 45: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 46: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 47: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 48: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 49: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 50: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms twice: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ Since $C_n &gt; A_n &gt; B_n \gg 1$ gives $2^{2^{C_n}} &gt; 2A_n &gt; 1+B_n$, it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. In particular: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$. Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion: $\;C &gt; A &gt; B$. &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$: $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$.</p> Answer 51: <p>One approach is to use the fact that if $x &gt; 3y$ and $y\ge2$, then $2^x &gt; 3\cdot4^y$. This is because</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In a similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$, the ratio is at least $4$ and definitely larger than $3$. We can use this to get relative estimates, starting at the tails of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. At the next step we have $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, and this result propagates all the way up until we reach $A$ and $B$, so $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in a similar and somewhat easier way. To do this we first make sure that the exponent chains are equally long, by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, and the rest follows - so $C&gt;A$.</p>
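<p>The base cases in the induction answer above ($16, 16, 65536$, and then $32$ versus $17$) are small enough to machine-check. What follows is a minimal sketch, assuming Python 3 (whose integers have arbitrary precision); the helper <code>lg</code> and the variable names are ours, not part of the original answer. Like the double-logarithm argument itself, it never materializes the towers, only their exponents:</p> <pre><code># exact base-2 logarithm, defined here only for powers of two
# (the helper name 'lg' is illustrative, not from the answer)
def lg(n):
    assert n &gt; 0 and (n &amp; (n - 1)) == 0, "power of two expected"
    return n.bit_length() - 1

# first induction step: A_1 = 2^4, B_1 = 4^2, C_1 = 2^2^2^2
# (Python's ** is right-associative, matching tower notation)
print(2 ** 4, 4 ** 2, 2 ** 2 ** 2 ** 2)   # 16 16 65536

# second step: from A_2 = 2^(4^(2^4)) and B_2 = 2^(2 * 2^(4^2))
# read off lg(A_2) = 4^(2^4) and lg(B_2) = 2 * 2^(4^2) directly
lg_A2 = 4 ** (2 ** 4)                     # = 2^32
lg_B2 = 2 * 2 ** (4 ** 2)                 # = 2^17
print(lg(lg_A2), lg(lg_B2))               # 32 17, hence A_2 &gt; B_2
</code></pre>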
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 53: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 54: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 55: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 56: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 57: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 58: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 59: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 60: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 61: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 62: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 63: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 64: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 65: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 66: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 67: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 68: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 69: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 70: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 71: <p>One approach is to use that if $x &gt; 3y$ we have $2^x &gt; 3\cdot4^y$ if $y\ge2$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$. Obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{17}$ and $2^{4^2}=2^{16}$. Next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, so thereby we will have that result all the way until we've reached $A$ and $B$ so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p> Answer 72: <p>This is an excellent exercise to practice how to deal with exponentiation. 
Remembering that $x\cdot x^y=x^{y+1}$ and that $(x^y)^z=x^{y\cdot z}$, let:</p> <p>$a_1=2^4=16~~~~~~a_{i+1}=2^{4^{a_{i}}}=2^{(2^2)^{a_i}}=2^{2^{\mathbf{\left(2\cdot a_{i}\right)}}}$</p> <p>$b_1=4^2=16~~~~~~b_{i+1}=4^{2^{b_{i}}}=(2^2)^{2^{b_i}}=2^{2\cdot 2^{b_i}}=2^{2^{\mathbf{\left(b_i+1\right)}}}$</p> <p>$c_1=2^{2^{2^{2}}}=2^{16}~~~c_{i+1}=2^{2^{2^{2^{c_{i}}}}}=2^{2^{\mathbf{\left(2^{2^{c_{i}}}\right)}}}$</p> <p>so that $A=a_{500}, B=b_{500}, C=c_{500}$.</p> <p>Now, if $1&lt;b_i\leq a_i\leq c_i$ then $\mathbf{(b_i+1)}\leq (a_i+1) &lt; \mathbf{(2\cdot a_i)} \leq (2\cdot c_i) &lt; 2^{c_i} &lt; \mathbf{\left( 2^{2^{c_i}}\right)}$, so if we look at what is in parentheses "on top" of the $2^2$ in the last term of the expressions of $a_{i+1}$, $b_{i+1}$, $c_{i+1}$ we have:</p> <p>$1&lt;b_1=a_1&lt;c_1 \implies 1&lt;b_2&lt;a_2&lt;c_2 \implies\dots \implies 1&lt;b_{500}&lt;a_{500}&lt;c_{500}$, i.e. </p> <blockquote> <p>$B&lt;A&lt;C$.</p> </blockquote> Answer 73: <p>Alternative and relatively easy solution without using logarithms. Let us set </p> <p>$$h=2^{4^{2^4}}$$</p> <p>$$j=4^{2^{4^2}}$$</p> <p>$$k=2^{2^{2^{.^{.^{.^{2^{2}}}}}}}$$</p> <p>where $h $ and $j $ only contain four numbers (two $2$ and two $4$), and $k $ contains sixteen $2$. Calculating the exponential towers starting from the top, we have </p> <p>$$h=2^{4^{16}} = 2^{2^{32}} $$</p> <p>$$j=4^{2^{16}} = 2^{2 \cdot 2^{16}} = 2^{ 2^{17}} $$</p> <p>so that $h&gt;j $ . Also, considering $k $ and solving its first two exponentials from the top, we get an exponential tower formed by fourteen elements where the first thirteen are $2$ and the last one is $16$. Therefore we clearly have $k&gt;h&gt;j$.</p> <p>Now let us consider the numbers $A , B , C $ reported in the OP. Again starting to solve the exponentials from the top, we can rewrite them as</p> <p>$$A=(A_1)^h $$ $$B=(B_1)^j $$ $$C=(C_1)^k $$</p> <p>where $A_1$ and $B_1 $ are exponential towers similar to the initial towers $A $ and $B $, respectively, but with $500-4=496 \,\,\, $ elements instead of $500$; and where $C_1$ is an exponential tower similar to the initial $C $ but with $2000-16=1984 \,\,\, $ elements instead of $2000$.</p> <p>Because $k&gt;h&gt;j \,\,\, $, if we could show that $C_1&gt;A_1&gt;B_1 \,\,\, $, then this would necessarily imply $C&gt;A&gt;B \,\,\, $. With this in mind, we can now repeat the same procedure above, applying it to $A_1, B_1, C_1 \,\,\, $. So we can set </p> <p>$$A_1=(A_2)^h $$ $$B_1=(B_2)^j $$ $$C_1=(C_2)^k $$</p> <p>where $A_2$ and $B_2 $ are again exponential towers similar to $A $ and $B$, respectively, but with $500-2\cdot 4 =492 \,\,\, $ elements instead of $500$; and where $C_2$ is an exponential tower similar to $C $ but with $2000-2 \cdot 16=1968 \,\,\, $ elements instead of $2000$. As above, because $k&gt;h&gt;j \,\,\, $, if we could show that $C_2&gt;A_2&gt;B_2$, then this would necessarily imply $C_1&gt;A_1&gt;B_1 \,\,\, $, and then $C&gt;A&gt;B \,\,\, $.</p> <p>To generalize, after repeating this procedure $m $ times, we obtain two numbers $A_m$ and $B_m $ that are exponential towers similar to $A $ and $B $, respectively, but with $500-m\cdot 4 \,\,\,$ elements instead of $500$, and a third number $C_m$ that is an exponential tower similar to $C $ but with $2000-m \cdot 16 \,\,\, $ elements instead of $2000$. 
As above, because $k&gt;h&gt;j \,\,\, $, showing that $C_m&gt;A_m&gt;B_m \,\,\, \,\, $ necessarily implies $C_{m-1}&gt;A_{m-1}&gt;B_{m-1} \,\, \,\,\, $, which in turn implies $C_{m-2}&gt;A_{m-2}&gt;B_{m-2} \,\,\, \,\, $ and so on, until $C&gt;A&gt;B \,\,\, \,\, $.</p> <p>So, repeating this procedure $m=124 \,\,$ times, we arrive to a point where $A_{124} $ and $B_{124} $ are exponential towers with $500-124\cdot 4 =4 \,\,\, $ elements, and $C_{124}$ is an exponential tower with $2000-124 \cdot 16=16 \,\,\, $ elements. Since $ A_{124}=h \,\, \,\,\, $, $B_{124}=j \,\, \,\,\, $, and $C_{124}=k \,\, \,\,\, $, we have $ C_{124}&gt;A_{124}&gt;B_{124} \,\,\, $, which necessarily implies, as shown above, $C&gt;A&gt;B \,\,\, $. </p> Answer 74: <p>The problem can be re-formulated in algorithmic style as follows: $$ \begin{matrix} a_0 = 1 &amp; b_0 = 1 &amp; c_0 = 1 \\ &amp; &amp; c_1 = 2^{c_0} \\ a_1 = 4^{a_0} &amp; b_1 = 2^{b_0} &amp; c_2 = 2^{c_1} \\ &amp; &amp; c_3 = 2^{c_2} \\ a_2 = 2^{a_1} &amp; b_2 = 4^{b_1} &amp; c_4 = 2^{c_3} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ a_k = 4^{a_{k-1}} &amp; b_k = 2^{b_{k-1}} &amp; c_{2k} = 2^{c_{2k-1}} \\ &amp; &amp; c_{2k+1} = 2^{c_{2k}} \\ a_{k+1} = 2^{a_k} &amp; b_{k+1} = 4^{b_k} &amp; c_{2k+2} = 2^{c_{2k+1}} \\ \cdot &amp; \cdot &amp; \cdot \\ \cdot &amp; \cdot &amp; \cdot \\ A = a_{1000} = 2^{a_{999}} &amp; B = b_{1000} = 4^{b_{999}} &amp; C = c_{2000} = 2^{c_{1999}} \end{matrix} $$ Now we know how to <I>calculate</I> these huge numbers - though very much in principle :-)</p> <p>The logarithm with base $2$ may be defined formally as $\,\operatorname{lg}(x) = \ln(x)/\ln(2)$ . <BR>But in the practice below its meaning is much simpler, as might become clear from a few examples: $\lg(2) = 1 \; ; \; \lg(4) = \lg\left(2^2\right) = 2 \; ; \; \lg(8) = \lg\left(2^3\right) = 3$ . In general : $\,\operatorname{lg}\left(2^x\right) = x$ .<BR>We need a function to chop off the exponent from a power (tower) of two. We could have called this function "chopper", but the proper name for it in common mathematics is $2$-logarithm. Take it, or if you don't like logarithms: leave it. The whole employment of our "exponent chopper" is like in here: $2^x &gt; 2^y \; \Longleftrightarrow \; x &gt; y$ . So what's the problem?</p> <p>We are going to employ <A HREF="http://math.stackexchange.com/questions/1588666/why-do-we-do-mathematical-induction-only-for-positive-whole-numbers/1588698#1588698"><I>mathematical induction</I></A>. To that end, define subsequent "approximations" of $A,B,C$ as follows: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{4n} $$ As the first induction step, we calculate: $$ A_1 = a_2 = 2^4 = 16 \quad ; \quad B_1= b_2 = 4^2 = 16 \quad ; \quad C_1 = c_4 = 2^{2^{2^2}} = 65536 $$ That's not enough to establish an inequality between $A$ and $B$, so we take a second step for these: $$ A_2 = a_4 = 2^{4^{2^4}} \quad ; \quad B_2 = b_4 = 4^{2^{4^2}} = 2^{2\cdot{2^{4^2}}} $$ Logarithms base $2$ : $$ \operatorname{lg}\left(2^{4^{2^4}}\right) = 4^{2^4} = 2^{2\cdot{2^4}} \quad ; \quad \operatorname{lg}\left(4^{2^{4^2}}\right) = 2\cdot 2^{4^2} = 2^{1+4^2} $$ And again: $$ \operatorname{lg}\left(4^{2^4}\right) = 2\cdot 2^4 = 32\quad ; \quad \operatorname{lg}\left(2\cdot 2^{4^2}\right) = 1 + 4^2 = 17 $$ From $\,32 &gt; 17\,$ it follows that $\;A_2 &gt; B_2\,$ , namely: $\;2^{2^{32}} &gt; 2^{2^{17}}$ .</p> <p>Now assume that $\;C_n &gt; A_n &gt; B_n \gg 1\;$ and prove that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}$ . 
$$ \begin{matrix} A_n = a_{2n} &amp; B_n = b_{2n} &amp; C_n = c_{4n} \\ A_{n+1} = a_{2n+2} = 2^{4^{A_n}} &amp; B_{n+1} = b_{2n+2} = 4^{2^{B_n}} &amp; C_{n+1} = c_{4n+4} = 2^{2^{2^{2^{C_n}}}} \end{matrix} $$ Taking logarithms two times: $$ \begin{matrix} \operatorname{lg}\left(A_{n+1}\right) = 4^{A_n} &amp; \operatorname{lg}\left(B_{n+1}\right) = 2\cdot 2^{B_n} &amp; \operatorname{lg}\left(C_{n+1}\right) = 2^{2^{2^{C_n}}} \\ \operatorname{lg}\left(\operatorname{lg}\left(A_{n+1}\right)\right) = 2A_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(B_{n+1}\right)\right) = 1+B_n &amp; \operatorname{lg}\left(\operatorname{lg}\left(C_{n+1}\right)\right) = 2^{2^{C_n}} \end{matrix} $$ From which it follows that $\;C_{n+1} &gt; A_{n+1} &gt; B_{n+1}\;$ as well. Especially: $$ C_2 &gt; A_2 &gt; B_2 $$ So for all $\,n \ge 2\,$ we have $\;C_n &gt; A_n &gt; B_n$ . Now specialize for $\,n = 500\,$ and you're done.<BR>Conclusion : $\;C &gt; A &gt; B$ . &nbsp; Suggestive picture (not pretending anything more):</p> <p><a href="https://i.stack.imgur.com/aygQY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aygQY.jpg" alt="enter image description here"></a></p> <p>BONUS. Somewhat more of a challenge is the following modification of the question:<BR>in $C$ there are $1500$ numbers $2$ (instead of $2000$). Then we have: $$ A_n = a_{2n} \quad ; \quad B_n = b_{2n} \quad ; \quad C_n = c_{3n} $$ And the first induction step results in an equality $\;A_1=B_1=C_1$ : $$ A_1 = 2^4 = 16 \quad ; \quad B_1 = 4^2 = 16 \quad ; \quad C_1 = 2^{2^2} = 16 $$ A second step is needed to establish an inequality: $$ \operatorname{lg}(\operatorname{lg}(A_2)) = 32 \quad ; \quad \operatorname{lg}(\operatorname{lg}(B_2)) = 17 \\ \operatorname{lg}(\operatorname{lg}(C_2)) = \operatorname{lg}\left(\operatorname{lg}\left(2^{2^{2^{16}}}\right)\right) = \operatorname{lg}\left(2^{2^{16}}\right) = 2^{16} = 65536 $$ Finally resulting in the same as before: $\;C &gt; A &gt; B$ .</p> Answer 75: <p>One approach is to use that if $x &gt; 3y$ and $y\ge2$, we have $2^x &gt; 3\cdot4^y$. This is because:</p> <p>$$2^x &gt; 2^{3y}$$ $$4^y = 2^{2y}$$</p> <p>so their ratio is:</p> <p>$${2^x\over4^y} &gt; {2^{3y}\over2^{2y}}=2^y$$</p> <p>In a similar way we have that $4^x &gt; 3\cdot2^y$.</p> <p>So if $y\ge2$ we have that the ratio is at least $4$ and definitely larger than $3$. So we can use this to get relative estimates. Starting at the tail of $A$ and $B$: obviously $2^4=16=4^2$, but next we have $4^{2^4} = 4^{16} = 2^{32}$ and $2^{4^2}=2^{16}$. In the next step we will have that $2^{4^{2^4}} &gt; 3\cdot4^{2^{4^2}}$, and the same result holds all the way up until we've reached $A$ and $B$, so we will have $A&gt;3B&gt;B$ (I assume that they all contain the same number of numbers). </p> <p>The relation between $A$ and $C$ can be seen in a similar and somewhat easier way. To do this we will first make sure that the exponent chain is equally long. This is done by just replacing the tail of $C$. Obviously $2^{2^{\cdots{997}\cdots^2}} &gt; 3\cdot4$, the rest follows - so $C&gt;A$.</p>
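<p>A minimal worked check of the first instance of the inequality claimed above, evaluating both towers from the top: $$2^{4^{2^4}} = 2^{2^{32}}, \qquad 3\cdot 4^{2^{4^2}} = 3\cdot 2^{2^{17}} &lt; 2^{2^{17}+2},$$ and since $2^{32} &gt; 2^{17}+2$, indeed $2^{4^{2^4}} &gt; 3\cdot 4^{2^{4^2}}$.</p>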
Why can we resolve indeterminate forms?
2,094,596
11
https://math.stackexchange.com/questions/2094596/why-can-we-resolve-indeterminate-forms
<p>I'm questioning myself as to why indeterminate forms arise, and why limits that apparently give us indeterminate forms can be resolved with some arithmetic tricks. Why is $$\lim_{x \rightarrow +\infty} \frac{x+1}{x-1}=\frac{+\infty}{+\infty},$$</p> <p>and yet, if I do a simple operation,</p> <p>$$\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(1-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(1-\frac{1}{x})}=1\,?$$</p> <p>I understand the logic of the process, but I can't understand why we get different results by "not" changing anything.</p>
[ { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. 
We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>indeterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with a slightly modified version of your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seeing things in more detail</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" } ]
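<p>A minimal numerical sanity check of the resolved limit, taking the illustrative sample point $x=10^6$: $$\frac{x+1}{x-1}\bigg|_{x=10^6}=\frac{1000001}{999999}=1+\frac{2}{999999}\approx 1.000002,$$ which is consistent with the limit $1$ obtained above.</p>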
We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. 
Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? 
\\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. 
What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. 
More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. 
as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. 
So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. 
We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. 
Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? 
\\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that\n$$\\lim_{x \\to +\\infty} g_1(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_1(x)$$ do not exist, while $$\\lim_{x \\to +\\infty} g_2(x) \\quad\\mbox{and}\\quad \\lim_{x \\to +\\infty} h_2(x)$$do exist. 
What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>\"limit of a quotient, is the quotient of the limits; if these two limits exist\"</em>):\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g_1(x)}{h_1(x)} = \\color{blue}{ \\lim_{x \\to +\\infty}\\frac{g_2(x)}{h_2(x)} = \\frac{\\displaystyle \\lim_{x \\to +\\infty} g_2(x)}{\\displaystyle \\lim_{x \\to +\\infty} h_2(x)}} = \\cdots$$and in this way, also find $\\lim_{x \\to +\\infty} f(x)$.</p>\n\n<hr>\n\n<p>When you try to apply that rule but the individual limits do not exist, you \"go back\" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:\n$$\\begin{align}\n\\lim_{x \\rightarrow +\\infty} f(x) &amp; = \\lim_{x \\rightarrow +\\infty} \\frac{x+1}{x-1} \\color{red}{\\ne} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x+1)}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (x-1)}= \\frac{+\\infty}{+\\infty} = \\; ? \\\\[7pt]\n&amp; = \\lim_{x \\rightarrow +\\infty} \\frac{1+\\tfrac{1}{x}}{1-\\tfrac{1}{x}} \\color{green}{=} \\frac{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1+\\tfrac{1}{x})}{\\displaystyle \\lim_{x \\rightarrow +\\infty} (1-\\tfrac{1}{x})} = \\frac{1+0}{1+0} = 1 \\\\\n\\end{align}$$</p>\n" }, { "answer_id": 2094614, "score": 4, "text": "<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \\to L_f$ and $g(t) \\to L_g \\neq 0$, we have\n$$\n\\lim_{t \\to a} \\frac{f(t)}{g(t)} = \\lim_{t \\to a} Q(f(t),g(t)) = \\lim_{(x,y) \\to (L_f,L_g)}Q(x,y) = Q(L_f,L_g)\n$$\nHowever, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\\lim_{(x,y) \\to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p>\n\n<p>Similarly, $Q$ is \"discontinuous at $\\infty$\", since $\\lim_{(x,y) \\to (\\infty, \\infty)}Q(x,y)$ does not exist. So, $\\infty/\\infty$ is an indeterminate form.</p>\n" }, { "answer_id": 2094611, "score": 2, "text": "<p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens.\nContinuing with your example, we have\n$$\n\"\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\frac{+\\infty}{+\\infty}\"\n$$ then <em>seing things in more details</em>:\n$$\n\\lim_{x \\rightarrow +\\infty}\n\\frac{x+1}{2x-1}=\\lim_{x \\rightarrow +\\infty}\n\\frac{x(1+\\frac{1}{x})}{x(2-\\frac{1}{x})}=\\lim_{x \\rightarrow +\\infty}\\frac{(1+\\frac{1}{x})}{(2-\\frac{1}{x})}=\\frac12.\n$$ The indeterminate form is the same, the result is different.\nAn <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>\n" }, { "answer_id": 2094607, "score": 23, "text": "<p>So you're looking at something of the form\n$$\\lim_{x \\to +\\infty} f(x) = \\lim_{x \\to +\\infty}\\frac{g(x)}{h(x)} $$\nand if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions:\n$$f(x) = \\frac{g_1(x)}{h_1(x)} = \\frac{g_2(x)}{h_2(x)}$$\nThe limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. 
Question: Why can we resolve indeterminate forms? Answer 1: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit is $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1-0} = 1 \\ \end{align}$$</p> Answer 2: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p>
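A short path computation makes Answer 2's key claim, that $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist, concrete; this is an illustrative sketch added here, with the slope $c \neq 0$ an arbitrary parameter that does not appear in the original answers. Approaching the origin along the line $y = cx$ gives $$ Q(x, cx) = \frac{x}{cx} = \frac{1}{c} \qquad \text{for every } x \neq 0, $$ so the limit of $Q$ along that line is $1/c$. Different slopes $c$ force different limiting values, hence no single two-variable limit can exist at $(0,0)$; this is exactly what makes $0/0$ indeterminate, and it is the same phenomenon Answer 3 below describes: many results are possible.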
Answer 3: <p>An <em>indeterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seeing things in more detail</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different.
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 31: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 32: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 33: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 34: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 35: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 36: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 37: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 38: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 39: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 40: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 41: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 42: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 43: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 44: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 45: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 46: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 47: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 48: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 49: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 50: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 51: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 52: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 53: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 54: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 55: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 56: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 57: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. 
An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p> Answer 58: <p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit it $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1+0} = 1 \\ \end{align}$$</p> Answer 59: <p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p> Answer 60: <p>An <em>inderterminate form</em> just means that we have to take a closer look to understand what happens. Continuing with your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seing things in more details</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>
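<em>(Editor's note: the following numerical illustration is an addition to this write-up and is not part of the original exchange.)</em> As a quick sanity check of the $\infty/\infty$ example above, the values of $(x+1)/(2x-1)$ do settle on the limit $\tfrac{1}{2}$; the snippet below is a minimal Python sketch of that check.

```python
# Numerical sanity check (illustration only, not from the original answers):
# (x+1)/(2x-1) is of the indeterminate form "infinity/infinity" as x grows,
# but rewriting it as (1 + 1/x)/(2 - 1/x) shows the limit 1/2, and the
# raw values confirm it.
for x in (1e1, 1e3, 1e6, 1e9):
    print(f"x = {x:10.0e}   (x+1)/(2x-1) = {(x + 1) / (2 * x - 1):.10f}")
```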
Limit of the sequence $\left(\sum_{k=0}^n f\!\left(\frac{k}{n^2}\right)\right)_n$.
2,096,624
11
https://math.stackexchange.com/questions/2096624/limit-of-the-sequence-left-sum-k-0n-f-left-frackn2-right-right-n
<p><em>Following <a href="http://meta.math.stackexchange.com/questions/25653/asking-competitive-exam-questions-to-which-i-have-solutions">this post on Meta</a>, I am going to regularly ask questions from competitive mathematics exams, on a variety of topics; and provide a solution a few days later. The goal is not only to list interesting (I hope) exercises for the sake of self-study, but also to obtain (again, hopefully) a variety of techniques to solve them.</em></p> <blockquote> <p>Let <span class="math-container">$f\colon\mathbb{R}\to\mathbb{R}$</span> be differentiable at <span class="math-container">$0$</span>, and such that <span class="math-container">$f(0)=0$</span>. Letting <span class="math-container">$s_n\stackrel{\rm def}{=} \sum_{k=1}^n f\!\left(\frac{k}{n^2}\right)$</span> for <span class="math-container">$n\geq 1$</span>, find the limit of the sequence <span class="math-container">$(s_n)_{n\geq 1}$</span>.</p> </blockquote> <p><strong>Reference:</strong> Exercise 4.26 in <em>Exercices de mathématiques: oraux X-ENS (Analyse I)</em>, by Francinou, Gianella, and Nicolas (2014) ISBN 978-2842252137.</p>
[ { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. 
Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }
$$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. 
Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. 
In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. 
Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). 
$$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. 
Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. 
In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. 
Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). 
$$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. 
Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. 
In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. 
Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). 
$$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. 
Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" }, { "answer_id": 2096637, "score": 11, "text": "<p>Define $\\phi : \\Bbb{R} \\to \\Bbb{R}$ by</p>\n\n<p>$$ \\phi(x) = \\begin{cases}\n\\dfrac{f(x)}{x}, &amp; x \\neq 0 \\\\\nf'(0), &amp; x = 0\n\\end{cases} $$</p>\n\n<p>Then $\\phi$ is continuous at $0$ and $f(x) = x\\phi(x)$. Now your sum reduces to</p>\n\n<p>$$ s_n = \\sum_{k=0}^{n} \\frac{k}{n^2}\\phi\\left(\\frac{k}{n^2}\\right), $$</p>\n\n<p>from which we find that</p>\n\n<p>$$ \\Big( \\inf_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2} \\leq s_n \\leq \\Big( \\sup_{[0,1/n]}\\phi \\Big) \\sum_{k=0}^{n} \\frac{k}{n^2}. $$</p>\n\n<p>Taking $n \\to \\infty$, by the squeezing theorem we get</p>\n\n<p>$$ s_n \\to \\frac{1}{2}\\phi(0) = \\frac{1}{2}f'(0). $$</p>\n" }, { "answer_id": 2096639, "score": 2, "text": "<p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\\frac{1}{2}f'(0)$.</p>\n" }, { "answer_id": 2099304, "score": 1, "text": "<p><em>An answer (inspired from that of the book cited as reference in the question).</em></p>\n\n<ul>\n<li><p>First, let us try with a few simple examples:</p>\n\n<ul>\n<li><p>Identity $f\\colon x\\mapsto x$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} \\xrightarrow[n\\to\\infty]{} \\int_0^1 xdx = \\frac{1}{2}\n$$</p></li>\n<li><p>Square $f\\colon x\\mapsto x^2$\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = \\frac{1}{n^2}\\sum_{k=0}^n \\frac{k^2}{n^2} \\sim_{n\\to\\infty} \\frac{1}{n}\\int_0^1 x^2dx \\xrightarrow[n\\to\\infty]{} 0\n$$</p></li>\n</ul></li>\n</ul>\n\n<p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\\sum_{k=1}^\\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>\"it should obviously converge to $\\frac{f'(0)}{2}$\"</em>), or, well, try to continue nonetheless. </p>\n\n<ul>\n<li><p>One natural idea would be to write\n$f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\sum_{k=0}^n o\\left(\\frac{k}{n^2}\\right)\n= f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + o\\left(1\\right)\n$$\nand conclude that the limit is $\\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may \"hide\" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li>\n<li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href=\"https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem\" rel=\"nofollow noreferrer\">Taylor's theorem</a> (the Peano form of the remainer) that\n$$\nf(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \\tag{1}\n$$\nfor some $r\\colon \\mathbb{R}\\to \\mathbb{R}$ with $\\lim_{x\\to 0}r(x)=0$. This allows us to reproduce the same chain as above:\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) = f'(0)\\cdot \\frac{1}{n}\\sum_{k=0}^n \\frac{k}{n} + \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\tag{2}\n$$\nand we want to argue the second term converges to $0$. 
In view of this, fix $\\varepsilon &gt; 0$: by assumption, there exists $\\delta_\\varepsilon&gt;0$ such that $\\lvert x\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert r(x)\\rvert \\leq \\varepsilon$. Since $0\\leq \\frac{k}{n^2} \\leq \\frac{1}{n}$ for all $0\\leq k\\leq n$, this implies that there exists $N_\\varepsilon\\geq 0$ such that, for all $n\\geq N_\\varepsilon$,\n$$\n\\left\\lvert \\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\right\\rvert\n\\leq \\frac{1}{n^2}\\sum_{k=0}^n \\left\\lvert r\\left(\\frac{k}{n^2}\\right)\\right\\rvert k\n\\leq \\varepsilon \\frac{1}{n^2}\\sum_{k=0}^n k = \\frac{\\varepsilon}{2}\\cdot \\frac{n(n+1)}{n^2} \\leq \\varepsilon\n$$\nshowing that \n$\n\\frac{1}{n^2}\\sum_{k=0}^n r\\left(\\frac{k}{n^2}\\right)k \\xrightarrow[n\\to\\infty]{} 0\n$\nas wanted. Thus, we do get from $(2)$ that\n$$\n\\sum_{k=0}^n f\\left(\\frac{k}{n^2}\\right) \n\\xrightarrow[n\\to\\infty]{} \\boxed{\\frac{f'(0)}{2}}\\;.\n$$</p></li>\n</ul>\n" } ]
Question: Limit of the sequence $\left(\sum_{k=0}^n f\!\left(\frac{k}{n^2}\right)\right)_n$. Answer 1: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 2: <p>Using the definition of the derivative, together with a Riemann sum to rewrite the sum as an integral, yields $\frac{1}{2}f'(0)$.</p> Answer 3: <p><em>An answer (inspired by that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, one tries to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little bit sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainder) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$.
Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 4: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 5: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 6: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. 
This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 7: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 8: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 9: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. 
Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 10: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 11: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 12: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. 
</p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 13: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). 
$$</p> Answer 14: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 15: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 16: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. 
Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 17: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 18: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. 
Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 19: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 20: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 21: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. 
Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 22: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 23: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 24: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. 
This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 25: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 26: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 27: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. 
Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 28: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p> Answer 29: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 30: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. 
</p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 31: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). 
$$</p> Answer 32: <p>Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.</p> Answer 33: <p><em>An answer (inspired from that of the book cited as reference in the question).</em></p> <ul> <li><p>First, let us try with a few simple examples:</p> <ul> <li><p>Identity $f\colon x\mapsto x$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n}\sum_{k=0}^n \frac{k}{n} \xrightarrow[n\to\infty]{} \int_0^1 xdx = \frac{1}{2} $$</p></li> <li><p>Square $f\colon x\mapsto x^2$ $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = \frac{1}{n^2}\sum_{k=0}^n \frac{k^2}{n^2} \sim_{n\to\infty} \frac{1}{n}\int_0^1 x^2dx \xrightarrow[n\to\infty]{} 0 $$</p></li> </ul></li> </ul> <p>Now, not sure how helpful this is if one does not already have an intuition for the answer, but one can also try the above more generally with a power series $f(x)=\sum_{k=1}^\infty a_k x^k$ satisfying $f(0)=0$ with non-zero radius of convergence to get $\frac{a_1}{2}$. At that point, either one recognizes a general trend (<em>"it should obviously converge to $\frac{f'(0)}{2}$"</em>), or, well, try to continue nonetheless. </p> <ul> <li><p>One natural idea would be to write $f(x) = f'(0)x + o(x)$ on a neighborhood of $0$, and then compute $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \sum_{k=0}^n o\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + o\left(1\right) $$ and conclude that the limit is $\frac{f'(0)}{2}$. Now, this is correct, but looks a little be sketchy due to the use of the Landau notation in the sum (which looks like it may "hide" some non-uniformity issues between summands) -- we will make it more rigorous below.</p></li> <li><p>Namely, since $f$ is differentiable at $0$, we can write by <a href="https://en.wikipedia.org/wiki/Taylor&#39;s_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">Taylor's theorem</a> (the Peano form of the remainer) that $$ f(x) = f(0)+f'(0)x+r(x)x= f'(0)x+r(x)x \tag{1} $$ for some $r\colon \mathbb{R}\to \mathbb{R}$ with $\lim_{x\to 0}r(x)=0$. This allows us to reproduce the same chain as above: $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) = f'(0)\cdot \frac{1}{n}\sum_{k=0}^n \frac{k}{n} + \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \tag{2} $$ and we want to argue the second term converges to $0$. In view of this, fix $\varepsilon &gt; 0$: by assumption, there exists $\delta_\varepsilon&gt;0$ such that $\lvert x\rvert \leq \delta_\varepsilon$ implies $\lvert r(x)\rvert \leq \varepsilon$. Since $0\leq \frac{k}{n^2} \leq \frac{1}{n}$ for all $0\leq k\leq n$, this implies that there exists $N_\varepsilon\geq 0$ such that, for all $n\geq N_\varepsilon$, $$ \left\lvert \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \right\rvert \leq \frac{1}{n^2}\sum_{k=0}^n \left\lvert r\left(\frac{k}{n^2}\right)\right\rvert k \leq \varepsilon \frac{1}{n^2}\sum_{k=0}^n k = \frac{\varepsilon}{2}\cdot \frac{n(n+1)}{n^2} \leq \varepsilon $$ showing that $ \frac{1}{n^2}\sum_{k=0}^n r\left(\frac{k}{n^2}\right)k \xrightarrow[n\to\infty]{} 0 $ as wanted. Thus, we do get from $(2)$ that $$ \sum_{k=0}^n f\left(\frac{k}{n^2}\right) \xrightarrow[n\to\infty]{} \boxed{\frac{f'(0)}{2}}\;. $$</p></li> </ul> Answer 34: <p>Define $\phi : \Bbb{R} \to \Bbb{R}$ by</p> <p>$$ \phi(x) = \begin{cases} \dfrac{f(x)}{x}, &amp; x \neq 0 \\ f'(0), &amp; x = 0 \end{cases} $$</p> <p>Then $\phi$ is continuous at $0$ and $f(x) = x\phi(x)$. 
Now your sum reduces to</p> <p>$$ s_n = \sum_{k=0}^{n} \frac{k}{n^2}\phi\left(\frac{k}{n^2}\right), $$</p> <p>from which we find that</p> <p>$$ \Big( \inf_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2} \leq s_n \leq \Big( \sup_{[0,1/n]}\phi \Big) \sum_{k=0}^{n} \frac{k}{n^2}. $$</p> <p>Taking $n \to \infty$, by the squeezing theorem we get</p> <p>$$ s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0). $$</p>
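A quick numerical sanity check of the boxed limit (an illustrative sketch, not part of the original answers): the Python snippet below evaluates the partial sums $\sum_{k=0}^n f(k/n^2)$ for growing $n$. The test function $f(x)=\sin x$ is an arbitrary choice satisfying $f(0)=0$ and $f'(0)=1$, so the sums should approach $1/2$.

<pre><code>import math

# Evaluate s_n = sum_{k=0}^{n} f(k/n^2) for the arbitrary test function
# f(x) = sin(x), which satisfies f(0) = 0 and f'(0) = 1.
def partial_sum(f, n):
    return sum(f(k / n**2) for k in range(n + 1))

for n in (10, 100, 1000, 10000):
    print(n, partial_sum(math.sin, n))  # values approach f'(0)/2 = 0.5
</code></pre>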
Probability: Escaping Prisoner Question
2,096,985
11
https://math.stackexchange.com/questions/2096985/probability-escaping-prisoner-question
<p><strong>Question</strong>: A prisoner in a dark dungeon discovers three tunnels leading from his cell. Unbeknownst to him, the first tunnel reaches a dead end after 50 feet and the second tunnel reaches a dead end after 20 feet, but the third tunnel leads to freedom after 100 feet. Each day, the prisoner picks a tunnel at random and crawls along it. If he reaches a dead end he has to crawl back to his cell.</p> <p>(The darkness is so complete that he might try the same tunnel on successive days).</p> <p>Find the expected distance that he crawls to reach freedom.</p> <p><strong>My Guess</strong>: I know that each tunnel has a probability of <span class="math-container">$\frac{1}{3}$</span> of being chosen.</p> <p><span class="math-container">$$ \text{If Distance to freedom} = 100(X) + 40(Y) + (100)(Z)$$</span></p> <p>Where</p> <p><span class="math-container">$$ X = \text{number of times he tries tunnel 1}$$</span></p> <p><span class="math-container">$$ Y = \text{number of times he tries tunnel 2} $$</span></p> <p><span class="math-container">$$Z = \text{number of times he tries tunnel 3}$$</span> <span class="math-container">$$ \text{ (=1 since it leads him to freedom)} $$</span></p> <p>Then</p> <p><span class="math-container">$$ E(D) = E(100X + 40Y + 100) = 100E(X) + 40E(Y) + 100$$</span></p> <p>I'm not sure if this logic is correct, and I'm a little confused on how to find E(X) and E(Y).</p> <p>Thanks!</p>
[ { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition on the prisoner's choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case, after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average $E[X]$ distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$.</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom on some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day, and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet), and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once.</p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for it; after some further thinking, I found it:</p>\n\n<p>No matter how many days it takes for the prisoner to get out, each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well.
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" } ]
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" }, { "answer_id": 2096989, "score": 7, "text": "<p>Hint: Use the total expectation formula to condition of prisoner choice.</p>\n\n<p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p>\n\n<p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p>\n\n<p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p>\n\n<ul>\n<li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li>\n<li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li>\n<li><p>in the third case it takes 100ft for the prisoner to get out</p></li>\n</ul>\n\n<p>and $P(T_1)=P(T_2)=P(T_3)=\\frac{1}{3}$</p>\n\n<p>So you can solve for $E[X]$</p>\n" }, { "answer_id": 2097000, "score": 2, "text": "<p>Let's assume he picks the tunnels with equal likelihood, i.e. $\\frac{1}{3}$ each.</p>\n\n<p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p>\n\n<p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\\frac{2}{3})^i * \\frac{1}{3}$</p>\n\n<p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p>\n\n<p>We thus get:</p>\n\n<p>$E(D) = 100 + \\Sigma_{i=1}^{\\infty} P(i)*D(i) = 100+ \\Sigma_{i=1}^{\\infty}(\\frac{2}{3})^i * \\frac{1}{3} * i * 70 = 100+ 70*\\frac{1}{3}*\\frac{2}{3}*\\Sigma_{i=1}^{\\infty}i*(\\frac{2}{3})^{i-1}= 100+\\frac{140}{9}*\\frac{1}{(1-\\frac{2}{3})^2} = 100+140=240$</p>\n" }, { "answer_id": 2097107, "score": 2, "text": "<p>I'll add another answer that I think is the simplest, and here is how I got to it:</p>\n\n<p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p>\n\n<p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p>\n\n<p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p>\n\n<p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>\n" } ]
Question: Probability: Escaping Prisoner Question Answer 1: <p>Hint: Use the total expectation formula to condition on the prisoner's choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$:</p> <ul> <li><p>in the first case, after traveling 100 ft the prisoner is back in the same situation as at the beginning, so it takes an expected distance of $E[X]$ again to get out</p></li> <li><p>in the second case, it takes only 40 ft for the prisoner to be back in the same situation as at the beginning</p></li> <li><p>in the third case, it takes 100 ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$.</p> <p>So you can solve for $E[X]$ (a worked solve is sketched below).</p> Answer 2: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So the prisoner definitely has to crawl the 100 feet to freedom on some day, but he may also crawl (for nothing) through the other tunnels: each day is lost with probability $\frac{2}{3}$, and on such a lost day he crawls 70 feet on average.</p> <p>Let $P(i)$ be the probability that the prisoner crawls for $i$ days for nothing before reaching freedom, so $P(i) = \left(\frac{2}{3}\right)^i \cdot \frac{1}{3}$.</p> <p>And let $D(i)$ be the expected distance the prisoner crawls over $i$ lost days, so $D(i) = 70i$.</p> <p>We thus get:</p> <p>$E(D) = 100 + \sum_{i=1}^{\infty} P(i)\,D(i) = 100 + \sum_{i=1}^{\infty}\left(\frac{2}{3}\right)^i \cdot \frac{1}{3} \cdot 70i = 100 + 70\cdot\frac{1}{3}\cdot\frac{2}{3}\sum_{i=1}^{\infty}i\left(\frac{2}{3}\right)^{i-1} = 100 + \frac{140}{9}\cdot\frac{1}{\left(1-\frac{2}{3}\right)^2} = 100 + 140 = 240$</p> Answer 3: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet), and I figured that was probably not a coincidence.</p> <p>Using @Momo's method, I quickly verified that it is indeed no coincidence: whatever the lengths of the tunnels, and however many tunnels there are (still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance is the sum of the distances traveled by going through each tunnel once.</p> <p>Even with Momo's method, though, some algebra is needed to reach this result, and I figured there had to be an even simpler argument. After some further thinking, I found it:</p> <p>No matter how many days it takes the prisoner to get out, on each day the prisoner picks each tunnel with equal likelihood (this holds on the days the prisoner has not yet escaped, as well as on the day the prisoner escapes). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 22: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 23: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 24: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 25: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 26: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 27: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 28: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 29: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 30: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 31: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 32: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 33: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 34: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 35: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 36: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 37: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 38: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 39: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 40: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 41: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 42: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 43: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 44: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 45: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 46: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 47: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 48: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 49: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 50: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 51: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 52: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 53: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 54: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 55: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 56: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 57: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. 
But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p> Answer 58: <p>Hint: Use the total expectation formula to condition of prisoner choice.</p> <p>If $X$ is the distance traveled and $T_1,T_2,T_3$ are the events that the prisoner chooses tunnel $1,2$ and $3$ respectively, then:</p> <p>$$E[X]=E[X|T_1]P(T_1)+E[X|T_2]P(T_2)+E[X|T_3]P(T_3)$$</p> <p>Now $E[X|T_1]=100+E[X]$, $E[X|T_2]=40+E[X]$, and $E[X|T_3]=100$</p> <ul> <li><p>in the first case after traveling 100ft the prisoner is in the same situation as in the beginning, so it takes on average E[X] distance again to get out</p></li> <li><p>in the second case it takes only 40ft for the prisoner to be in the same situation as in the beginning</p></li> <li><p>in the third case it takes 100ft for the prisoner to get out</p></li> </ul> <p>and $P(T_1)=P(T_2)=P(T_3)=\frac{1}{3}$</p> <p>So you can solve for $E[X]$</p> Answer 59: <p>Let's assume he picks the tunnels with equal likelihood, i.e. $\frac{1}{3}$ each.</p> <p>So, the prisoner definitely has to crawl that 100 feet to freedom at some day, but he can be crawling (for nothing) in those other tunnels as well with a likelihood of $\frac{2}{3}$ each day and for a distance of 70 feet on average on such a lost day.</p> <p>OK, so let $P(i)$ be the probability of the prisoner crawling for $i$ days for nothing before reaching freedom, so $P(i) = (\frac{2}{3})^i * \frac{1}{3}$</p> <p>And let $D(i)$ be the expected distance the prisoner crawls on $i$ lost days, so $D(i) = i*70$</p> <p>We thus get:</p> <p>$E(D) = 100 + \Sigma_{i=1}^{\infty} P(i)*D(i) = 100+ \Sigma_{i=1}^{\infty}(\frac{2}{3})^i * \frac{1}{3} * i * 70 = 100+ 70*\frac{1}{3}*\frac{2}{3}*\Sigma_{i=1}^{\infty}i*(\frac{2}{3})^{i-1}= 100+\frac{140}{9}*\frac{1}{(1-\frac{2}{3})^2} = 100+140=240$</p> Answer 60: <p>I'll add another answer that I think is the simplest, and here is how I got to it:</p> <p>I noticed that the answer (240 feet) was exactly the sum of going through (and back) both the 'dead-end' tunnels (100 feet and 40 feet respectively) and going through the 'freedom' tunnel once (100 feet) and I figured that was probably not a coincidence.</p> <p>Now, using @Momo's method, I quickly realized that indeed: whatever the lengths of the tunnels are, and no matter how many tunnels there are (though still with exactly one leading to freedom), if each tunnel is picked with equal probability, then the expected distance in the end will be the sum of the distances traveled by going through each tunnel once. </p> <p>OK, but even with Momo's method, there is still some algebra involved to get to this result, and I figured that there must be an even simpler argument for this result, and after some further thinking, I found it:</p> <p>Given that no matter how many days it takes for the prisoner to get out, it is true that each day the prisoner picks each tunnel with equal likelihood (this is true while the prisoner has not escaped yet, as well as when the prisoner has escaped). It thus follows that the expected number of times the prisoner goes through a tunnel is the same for each tunnel as well. But since we know the prisoner that escapes goes through the freedom tunnel exactly once, the prisoner is expected to go through each other tunnel exactly once as well!</p>
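Solving the first answer's total-expectation equation, $E[X] = \frac{1}{3}(100 + E[X]) + \frac{1}{3}(40 + E[X]) + \frac{1}{3}\cdot 100$, gives $\frac{1}{3}E[X] = 80$, i.e. $E[X] = 240$ feet. As a sanity check, here is a minimal Monte Carlo sketch (Python, modeling the tunnel costs exactly as the answers do; the helper name `escape_distance` is illustrative, not from the original answers):

```python
import random

def escape_distance(rng, dead_ends=(100, 40), freedom=100):
    """One prisoner: pick among 3 tunnels uniformly until the freedom tunnel is chosen."""
    total = 0
    while True:
        choice = rng.randrange(3)
        if choice == 2:
            return total + freedom   # freedom tunnel: 100 ft and out
        total += dead_ends[choice]   # dead end: 100 ft or 40 ft, back at the start

rng = random.Random(0)
trials = 200_000
avg = sum(escape_distance(rng) for _ in range(trials)) / trials
print(round(avg, 1))  # prints roughly 240.0
```

Averaged over many trials this lands near 240 feet, agreeing with both the algebraic solution and the traversal-counting argument above.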
Alternative form of Eisenstein integers
2,097,658
11
https://math.stackexchange.com/questions/2097658/alternative-form-of-eisenstein-integers
<p>I just recently got into number theory and algebra, so my knowledge is very limited. I stumbled upon the Eisenstein integers $\mathbb{Z}[\omega]$. They are of the form $\varepsilon = a + b \omega$, where $\omega = e^{2 \pi i/3} = -1/2 + i \sqrt{3}/2$.</p> <p>Now my problem:</p> <blockquote> <p>I want to show that you can write every Eisenstein integer in the form $\varepsilon = \frac{a + bi \sqrt{3}}{2}$ where $a \equiv b \bmod 2.$</p> </blockquote> <p>So far I have: $\omega = -1/2 + i \sqrt{3}/2 \Rightarrow \varepsilon = a + b \omega = a - b/2 + i \sqrt{3}b/2$; then $a = b + 2k \Rightarrow b = a - 2k \Rightarrow \varepsilon = a - a/2 + k + i \sqrt{3}b/2 = \frac{a + bi\sqrt{3}}{2} + k$.</p> <p>Now I still have to deal with the superfluous $k \in \mathbb{Z}$. What is the argument for getting rid of it?</p>
[ { "answer_id": 2097670, "score": 8, "text": "<p>Let $p,q$ be integers. Then, as you tried, we have\n$$p+q\\omega=\np+\\dfrac{-q+qi\\sqrt 3}{2}=\n\\dfrac{2p-q+qi\\sqrt 3}{2}$$\nLet $a=2p-q$ and $b=q$. What can you conclude from here?</p>\n" }, { "answer_id": 2097897, "score": 7, "text": "<p>I think this excessive emphasis on $\\omega$ often serves to confuse students. Don't get me wrong, $\\omega$ is important. $$\\omega = -\\frac{1}{2} + \\frac{\\sqrt{-3}}{2},$$ and $N(\\omega) = 1$, which means that $\\omega$ is a unit. And furthermore $\\omega$ is a complex cubic root of 1.</p>\n\n<p>Let's review the norm function for numbers in $\\mathbb{Q}(\\sqrt{-3})$: $$N\\left(\\frac{a}{2} + \\frac{b \\sqrt{-3}}{2}\\right) = \\frac{a^2}{4} + \\frac{3b^2}{4}.$$ An algebraic number in $\\mathbb{Q}(\\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\\mathbb{Z}$.</p>\n\n<p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p>\n\n<p>If both $a$ and $b$ are odd, then $a^2 \\equiv 1 \\pmod 4$ and $b^2 \\equiv 1 \\pmod 4$ as well. But then $3b^2 \\equiv 3 \\pmod 4$, so that $a^2 + 3b^2 \\equiv 0 \\pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p>\n\n<p>Now, how to convert $$m + n \\omega = \\frac{a}{2} + \\frac{b \\sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\\mathbb{Z}$?</p>\n\n<p>Well, $a = 2m - n$ and $b = n$.</p>\n\n<p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p>\n\n<p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p>\n\n<p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p>\n\n<p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p>\n\n<hr>\n\n<p>For exercises, I suggest the following: convert the numbers $-\\omega$, $-2 + \\omega$, $3 - \\omega$ and $4 + \\omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \\omega^2$.</p>\n" }, { "answer_id": 2097679, "score": 6, "text": "<p>Your end number is $$\\frac{(a+2k)+bi\\sqrt{3}}{2}$$ which is of the right form.</p>\n\n<p>Another way to argue is that if $x+y\\sqrt{-3}$ is an integer where $x,y\\in \\mathbb{Q}$ then $x-y\\sqrt{-3}$ is also an integer and thus \n$2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\\sqrt{3}$ is irrational.</p>\n\n<p>So we have $x=\\frac{a}{2}$ and $y=\\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p>\n" }, { "answer_id": 2097670, "score": 8, "text": "<p>Let $p,q$ be integers. Then, as you tried, we have\n$$p+q\\omega=\np+\\dfrac{-q+qi\\sqrt 3}{2}=\n\\dfrac{2p-q+qi\\sqrt 3}{2}$$\nLet $a=2p-q$ and $b=q$. What can you conclude from here?</p>\n" }, { "answer_id": 2097897, "score": 7, "text": "<p>I think this excessive emphasis on $\\omega$ often serves to confuse students. Don't get me wrong, $\\omega$ is important. $$\\omega = -\\frac{1}{2} + \\frac{\\sqrt{-3}}{2},$$ and $N(\\omega) = 1$, which means that $\\omega$ is a unit. 
And furthermore $\\omega$ is a complex cubic root of 1.</p>\n\n<p>Let's review the norm function for numbers in $\\mathbb{Q}(\\sqrt{-3})$: $$N\\left(\\frac{a}{2} + \\frac{b \\sqrt{-3}}{2}\\right) = \\frac{a^2}{4} + \\frac{3b^2}{4}.$$ An algebraic number in $\\mathbb{Q}(\\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\\mathbb{Z}$.</p>\n\n<p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p>\n\n<p>If both $a$ and $b$ are odd, then $a^2 \\equiv 1 \\pmod 4$ and $b^2 \\equiv 1 \\pmod 4$ as well. But then $3b^2 \\equiv 3 \\pmod 4$, so that $a^2 + 3b^2 \\equiv 0 \\pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p>\n\n<p>Now, how to convert $$m + n \\omega = \\frac{a}{2} + \\frac{b \\sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\\mathbb{Z}$?</p>\n\n<p>Well, $a = 2m - n$ and $b = n$.</p>\n\n<p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p>\n\n<p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p>\n\n<p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p>\n\n<p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p>\n\n<hr>\n\n<p>For exercises, I suggest the following: convert the numbers $-\\omega$, $-2 + \\omega$, $3 - \\omega$ and $4 + \\omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \\omega^2$.</p>\n" }, { "answer_id": 2097679, "score": 6, "text": "<p>Your end number is $$\\frac{(a+2k)+bi\\sqrt{3}}{2}$$ which is of the right form.</p>\n\n<p>Another way to argue is that if $x+y\\sqrt{-3}$ is an integer where $x,y\\in \\mathbb{Q}$ then $x-y\\sqrt{-3}$ is also an integer and thus \n$2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\\sqrt{3}$ is irrational.</p>\n\n<p>So we have $x=\\frac{a}{2}$ and $y=\\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p>\n" }, { "answer_id": 2097670, "score": 8, "text": "<p>Let $p,q$ be integers. Then, as you tried, we have\n$$p+q\\omega=\np+\\dfrac{-q+qi\\sqrt 3}{2}=\n\\dfrac{2p-q+qi\\sqrt 3}{2}$$\nLet $a=2p-q$ and $b=q$. What can you conclude from here?</p>\n" }, { "answer_id": 2097897, "score": 7, "text": "<p>I think this excessive emphasis on $\\omega$ often serves to confuse students. Don't get me wrong, $\\omega$ is important. $$\\omega = -\\frac{1}{2} + \\frac{\\sqrt{-3}}{2},$$ and $N(\\omega) = 1$, which means that $\\omega$ is a unit. And furthermore $\\omega$ is a complex cubic root of 1.</p>\n\n<p>Let's review the norm function for numbers in $\\mathbb{Q}(\\sqrt{-3})$: $$N\\left(\\frac{a}{2} + \\frac{b \\sqrt{-3}}{2}\\right) = \\frac{a^2}{4} + \\frac{3b^2}{4}.$$ An algebraic number in $\\mathbb{Q}(\\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\\mathbb{Z}$.</p>\n\n<p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p>\n\n<p>If both $a$ and $b$ are odd, then $a^2 \\equiv 1 \\pmod 4$ and $b^2 \\equiv 1 \\pmod 4$ as well. But then $3b^2 \\equiv 3 \\pmod 4$, so that $a^2 + 3b^2 \\equiv 0 \\pmod 4$. 
Question: Alternative form of Eisenstein integers Answer 1: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 2: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. And furthermore, $\omega$ is a complex cube root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ A number of this form with $a, b \in \mathbb{Z}$ is an algebraic integer exactly when its norm is an integer from $\mathbb{Z}$ (its trace $a$ is automatically an integer, so the norm is the only thing to check).</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how do we convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are even as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, then $2m$ is even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 3: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an algebraic integer, where $x,y\in \mathbb{Q}$, then $x-y\sqrt{-3}$ is also an algebraic integer, and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2 = 4(x^2+3y^2) - (2x)^2$ is an integer, and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$, and since $4 \mid a^2+3b^2$ we see that $a$ and $b$ have the same parity.</p>
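A worked check of the exercises in Answer 2 (a sketch for illustration; the numbers below are computed here from the stated conversion $a = 2m - n$, $b = n$, not quoted from the thread): $$-\omega = \frac{1}{2} - \frac{\sqrt{-3}}{2}, \qquad N(-\omega) = \frac{1 + 3}{4} = 1,$$ $$-2 + \omega = -\frac{5}{2} + \frac{\sqrt{-3}}{2}, \qquad N(-2 + \omega) = \frac{25 + 3}{4} = 7,$$ $$3 - \omega = \frac{7}{2} - \frac{\sqrt{-3}}{2}, \qquad N(3 - \omega) = \frac{49 + 3}{4} = 13,$$ $$4 + \omega = \frac{7}{2} + \frac{\sqrt{-3}}{2}, \qquad N(4 + \omega) = \frac{49 + 3}{4} = 13.$$ In each case $a$ and $b$ are both odd, as the parity analysis in Answer 2 predicts. The same computation answers the hint in Answer 1: with $a = 2p - q$ and $b = q$, the difference $a - b = 2p - 2q$ is always even, so $a$ and $b$ automatically have the same parity.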
But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 6: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p> Answer 7: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 8: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. And furthermore $\omega$ is a complex cubic root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ An algebraic number in $\mathbb{Q}(\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\mathbb{Z}$.</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. 
Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 9: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p> Answer 10: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 11: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. And furthermore $\omega$ is a complex cubic root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ An algebraic number in $\mathbb{Q}(\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\mathbb{Z}$.</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 12: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p> Answer 13: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 14: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. 
And furthermore $\omega$ is a complex cubic root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ An algebraic number in $\mathbb{Q}(\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\mathbb{Z}$.</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 15: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p> Answer 16: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 17: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. And furthermore $\omega$ is a complex cubic root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ An algebraic number in $\mathbb{Q}(\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\mathbb{Z}$.</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. 
Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 18: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. This implies that $3(2y)^2$ is an integer and thus $2y$ is an integer since $\sqrt{3}$ is irrational.</p> <p>So we have $x=\frac{a}{2}$ and $y=\frac{b}{2}$ and since $4|a^2+3b^2$ we see that $a$ and $b$ have the same parity. </p> Answer 19: <p>Let $p,q$ be integers. Then, as you tried, we have $$p+q\omega= p+\dfrac{-q+qi\sqrt 3}{2}= \dfrac{2p-q+qi\sqrt 3}{2}$$ Let $a=2p-q$ and $b=q$. What can you conclude from here?</p> Answer 20: <p>I think this excessive emphasis on $\omega$ often serves to confuse students. Don't get me wrong, $\omega$ is important. $$\omega = -\frac{1}{2} + \frac{\sqrt{-3}}{2},$$ and $N(\omega) = 1$, which means that $\omega$ is a unit. And furthermore $\omega$ is a complex cubic root of 1.</p> <p>Let's review the norm function for numbers in $\mathbb{Q}(\sqrt{-3})$: $$N\left(\frac{a}{2} + \frac{b \sqrt{-3}}{2}\right) = \frac{a^2}{4} + \frac{3b^2}{4}.$$ An algebraic number in $\mathbb{Q}(\sqrt{-3})$ is also an algebraic integer if its norm is an integer from $\mathbb{Z}$.</p> <p>If both $a$ and $b$ are even, then $a^2$ and $3b^2$ are both multiples of 4, so that when you divide them by 4, you still have integers.</p> <p>If both $a$ and $b$ are odd, then $a^2 \equiv 1 \pmod 4$ and $b^2 \equiv 1 \pmod 4$ as well. But then $3b^2 \equiv 3 \pmod 4$, so that $a^2 + 3b^2 \equiv 0 \pmod 4$. Then, when you divide $a^2 + 3b^2$ by 4, you still have an integer.</p> <p>Now, how to convert $$m + n \omega = \frac{a}{2} + \frac{b \sqrt{-3}}{2},$$ with $m$ and $n$ both integers from $\mathbb{Z}$?</p> <p>Well, $a = 2m - n$ and $b = n$.</p> <p>If both $m$ and $n$ are even, we have no problem, as $a$ and $b$ are as well.</p> <p>If both $m$ and $n$ are odd, $a$ and $b$ are as well.</p> <p>If $m$ is odd but $n$ is even, we have $2m$ even and $2m - n$ is still even, so $a$ and $b$ are both even as desired.</p> <p>If $m$ is even but $n$ is odd, $2m - n$ is odd, so $a$ and $b$ are both odd as desired.</p> <hr> <p>For exercises, I suggest the following: convert the numbers $-\omega$, $-2 + \omega$, $3 - \omega$ and $4 + \omega$ to the $a$ and $b$ form. Also, compute their norms. Bonus: figure out the conversion for $m + n \omega^2$.</p> Answer 21: <p>Your end number is $$\frac{(a+2k)+bi\sqrt{3}}{2}$$ which is of the right form.</p> <p>Another way to argue is that if $x+y\sqrt{-3}$ is an integer where $x,y\in \mathbb{Q}$ then $x-y\sqrt{-3}$ is also an integer and thus $2x$ and $x^2+3y^2$ are integers. 
Six points lie on a circle
2,098,375
11
https://math.stackexchange.com/questions/2098375/six-points-lie-on-a-circle
<p>I am looking for a proof of a problem as follows:</p> <p>Let six points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on a circle. Define $C(A,B)$ to be any circle through the points $A, B$. Let $C(A_i,A_{i+1}) \cap C(A_{i+3}, A_{i+4}) = B_i, B_{i+3}$, where indices are taken modulo 6 (so, for example, $B_1$ and $B_4$ are the two intersections of $C(A_1,A_2)$ and $C(A_4,A_5)$). Let $B_1, B_2, B_3, B_4, B_5$ lie on a circle $(C)$. Let $C(A_i,A_{i+1}) \cap C(A_{i+1}, A_{i+2}) = C_{i+1}$, the second intersection point besides $A_{i+1}$. Show that $B_6$ also lies on the circle $(C)$ and that the six points $C_1, C_2, C_3, C_4, C_5, C_6$ lie on a circle.</p> <p><a href="https://i.stack.imgur.com/6241g.png"><img src="https://i.stack.imgur.com/6241g.png" alt="enter image description here"></a></p>
[ { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. 
If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4,A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the same circle, which we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, which we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$, and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. 
So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can then check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What are the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last condition is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h2>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h2>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>
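\n\n<p>As a quick numerical sanity check of the $B_4$ formula from the first section (an added illustration with concrete numbers): take $a=2$, $b=3$ and $B_1=(d,e)=(2,1)$. The circle through $A_1=\\infty$, $A_2$, $B_1$ degenerates to the vertical line $x=2$, and the circle through $A_4=(3,0)$, $A_5=(1,0)$, $B_1$ is the circle of radius $1$ centered at $(2,0)$, so the second intersection is $(2,-1)$. The formula agrees: the prefactor is $\\frac1{(a-d)^2+e^2}=1$, the first coordinate is $(2\\cdot 1-1)\\cdot 0+2\\cdot 1=2$ and the second is $(-1)(1)(1)=-1$.</p>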
\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>
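\n\n<p>For concreteness, here is a small numerical version of that cross-ratio test, sketched in Python (an illustrative addition; the symbolic computation above used a CAS instead):</p>\n\n<pre><code>from cmath import exp, pi\n\n# Four points are cocircular (or collinear) iff their cross ratio is real.\ndef cross_ratio(a, b, c, d):\n    return (a - c) * (b - d) / ((a - d) * (b - c))\n\ndef cocircular(a, b, c, d, eps=1e-9):\n    return abs(cross_ratio(a, b, c, d).imag) &lt; eps\n\n# Four points on the unit circle -> True:\npts = [exp(2j * pi * t) for t in (0.1, 0.3, 0.55, 0.8)]\nprint(cocircular(*pts))\n\n# A generic fourth point off that circle -> False:\nprint(cocircular(1, 1j, -1, 3 - 1j))\n</code></pre>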
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
\n\n<p>I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2,\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ is orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, the circle\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_4, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>
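\n\n<p><em>Remark:</em> The inversion step can be made explicit. Since $k_1$ is centered at $O_1$ and orthogonal to $k_A$, its radius $r_1$ satisfies $$r_1^2 = \\overline{O_1A_1} \\cdot \\overline{O_1A_2} = \\overline{O_1A_4} \\cdot \\overline{O_1A_5},$$ because both products equal the power of the point $O_1$ with respect to $k_A$, and this is precisely the condition for the inversion in $k_1$ to exchange $A_1 \\leftrightarrow A_2$ and $A_4 \\leftrightarrow A_5$.</p>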
\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$, which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of\nthe original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4,A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the same circle, which we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, which we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to the points $A_1, \\, A_4, \\, C_1, \\, C_4$, in order to show first that the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be $(C)$ again, and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which again is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of\nthe original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4,A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$, which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the same circle, which we call $k_{25}$.</p>
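<p><em>The equal products in this proof are just the power of the point $D$; a small numeric illustration with random sample points (not the configuration's actual points) is sketched below.</em></p>

<pre><code>import cmath, math, random

def line_intersection(p, q, r, s):
    # intersection of the lines pq and rs, points given as complex numbers
    a, b, c = q - p, r - s, r - p
    det = a.real * b.imag - a.imag * b.real
    t = (c.real * b.imag - c.imag * b.real) / det
    return p + t * a

# four random points on the unit circle
p, q, r, s = [cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(4)]
d = line_intersection(p, q, r, s)   # the chords pq and rs meet at d
# both chord products equal the absolute power | |d|^2 - 1 | of the point d
print(abs(p - d) * abs(q - d), abs(r - d) * abs(s - d), abs(abs(d)**2 - 1))
</code></pre>

<p><em>Read in the converse direction, as above, equal products through a common point force the four endpoints onto one circle.</em></p>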
<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, which we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to the points $A_1, \\, A_4, \\, C_1, \\, C_4$, in order to show first that the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be $(C)$ again, and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the point</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can then check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>
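<p><em>The factored conditions above come from a standard determinant test for cocircularity. A minimal sketch of that computation, assuming <code>sympy</code> (the author's own computation used Sage) and reusing the $B_4$ formula just derived; the symbol names are illustrative:</em></p>

<pre><code>from sympy import symbols, Matrix, cancel, factor

a, b, d, e, x, y = symbols('a b d e x y')

def cocirc_det(pts):
    # vanishes iff the four points are concyclic (or collinear)
    return Matrix([[px**2 + py**2, px, py, 1] for px, py in pts]).det()

den = (a - d)**2 + e**2
B1 = (d, e)
B4 = (((a*(b - d) + (a - b))*(a - d) + a*e**2) / den,
      ((a - b)*(a - 1)*e) / den)
B2 = (x, y)

# condition on a fourth point (X, Y) to lie on the circle through B1, B2, B4;
# substituting the coordinates of B5 and factoring the numerator is what
# yields the list of independent conditions quoted above
X, Y = symbols('X Y')
print(factor(cancel(cocirc_det([B1, B2, B4, (X, Y)]))))
</code></pre>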
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What are the conditions for $B_1,B_3,B_4,B_6$ to be cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last condition is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h2>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h2>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for the cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all these points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular (or collinear) iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>
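\n\n<p>For concreteness, here is how both ingredients can be set up in plain Python/numpy (just a sketch with names of my own choosing; the actual computation used a CAS): the cross-ratio test itself, and the exponent vectors living in the $153$-dimensional space of unordered point pairs.</p>\n\n<pre><code>import itertools
import numpy as np

def cross_ratio(A, B, C, D):
    # Complex inputs; a real result means the four points are
    # cocircular or collinear.
    return (A - C) * (B - D) / ((A - D) * (B - C))

# Four points on the unit circle have a real cross ratio:
z = [np.exp(1j * t) for t in (0.3, 1.1, 2.0, 4.5)]
assert abs(cross_ratio(*z).imag) &lt; 1e-12

# Exponent vectors: one coordinate per unordered pair of the 18 points.
points = range(18)  # the six A's, six B's and six C's
index = {pair: i for i, pair in enumerate(itertools.combinations(points, 2))}

def exponent_vector(a, b, c, d):
    v = np.zeros(len(index))  # len(index) == 153
    for p, q, s in ((a, c, 1), (b, d, 1), (a, d, -1), (b, c, -1)):
        v[index[tuple(sorted((p, q)))]] += s
    return v

# Stacking one such vector per known cocircularity (and per essentially
# different ordering of its quadruple) yields the matrix mentioned above.
</code></pre>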
\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>
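\n\n<p>The span test can be phrased as a rank comparison (again only a sketch; exact rational arithmetic, e.g. sympy matrices, avoids the pitfalls of floating-point rank):</p>\n\n<pre><code>import numpy as np

def in_row_space(M, t):
    # With the cocircularity vectors as the rows of M: a target vector t
    # lies in their span iff appending it does not increase the rank.
    return np.linalg.matrix_rank(np.vstack([M, t])) == np.linalg.matrix_rank(M)
</code></pre>\n" }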
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2\n\\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 =\nA_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal's line)\n$p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$\ncentered at the points $O_1,\\,O_2\\,O_3$ respectively and each of\nthem orthogonal to the circle $k_A$ (the circle passing through\nthe points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$\nintersect at two common points $S$ and $S^*$, i.e. the circles\n$k_1, k_2, k_3$ are coaxial (have a common radical axis, passing\nthrough the points $S$ and $S^*$, and orthogonal to the Pascal\nline $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight\nline) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an\narbitrary point $B_1$ from $k_B$ and draw the pair of circles\n$k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection\npoint $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also\nlies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an\ninversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and\n$A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to\nitself and is therefore orthogonal to $k_1$. Analogously, circles\n$k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis\ntheorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and\n$k(B_1A_4A_5)$, the three radical axes $B_1B_2, \\, A_1A_2$ and\n$A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap\nA_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is\nmapped to $B_4$ under the inversion with respect to circle $k_1$.\nConsequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \\in k_B$\nits inversive image $B_4$ also lies on $k_B$. Conversely, if both\n$B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal\nto $k_1$.</p>\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the\nplane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$\nintersect again at the second point $B_4 (\\neq B_1)$, and let the\npair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again\nat the second point $B_5 (\\neq B_2)$. Then the four points $B_1,\nB_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$\nis orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and\n$B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and\nonly if $k_B$ is orthogonal to both circles $k_1$ and $k_2$.\nTherefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes\nthrough the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$\nand $k_2$. 
However, by construction, $SS^*$ is a common radical\naxis for all three circles $k_1, \\, k_2$ and $k_3$ which means\nthat $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points\n$B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is\nanother (arbitrary) point on $k_B$, then the second intersection\npoint $B_6$ of the pair of circles $k(B_3A_3A_4)$ and\n$k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the\nstatement of the original problem, where points $A_1, A_2, A_3,\nA_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2,\nB_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of\nthe original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$,\nthe circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle\n$C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and\n$A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines\n$B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\,\nC(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of\nintersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is\ndifferent from $A_2$. Then their three corresponding radical axes\n$B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point\n$D$. Analogously, the three circles $k_B, \\, C(A_4A_5)$ and\n$C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and\n$A_5C_5$ which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common\ncircle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and\n$A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 =\nA_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords\n$B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy\n$$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently\n$$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is\npossible if and only if the four points $A_2, A_5, C_2$ and $C_5$\nlie on the the same circle, we call $k_{25}$.</p>\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie\non a common circle, we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to\nthe circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$\nand $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to points $A_1, \\, A_4, \\, C_1, \\, C_4$ in order to show that first the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a\ncommon circle, which has to be again $(C)$ and second that the\npoints $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common\ncircle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" }, { "answer_id": 2103276, "score": 3, "text": "<h2>Choice of coordinates</h2>\n\n<p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. 
Think $\\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p>\n\n<p>$$A_1=\\infty\\quad A_2=(a,0)\\quad A_3=(0,0)\\quad A_4=(b,0)\\quad A_5=(1,0)\\quad A_6=(c,0)$$</p>\n\n<p>I created <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html\" rel=\"nofollow noreferrer\">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p>\n\n<p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p>\n\n<p>$$B_4=\\frac1{(a-d)^2+e^2}\\begin{pmatrix}\n(a(b-d)+(a-b))(a-d)+ae^2 \\\\ (a-b)(a-1)e \\end{pmatrix}$$</p>\n\n<p>computed as <a href=\"https://math.stackexchange.com/q/2102900/35416\">the second point of intersection</a> between the <a href=\"https://math.stackexchange.com/q/1018949/35416\">corresponding circles</a>.</p>\n\n<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href=\"https://math.stackexchange.com/a/599456/35416\">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p>\n\n<p>$$\na=1\n\\\\\\vee\\\\\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(a-c-1)(x^2+y^2) + 2cx = ac\n\\\\\\vee\\\\\n(a-c)e(x^2+y^2)\n + (-ab + bc - a + b)ex\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y\n + (a-b)ce=0\n$$</p>\n\n<p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p>\n\n<p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>\n\n<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2>\n\n<p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. 
What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p>\n\n<p>$$\n(d-a)^2+e^2=(1-a)(b-a)\n\\\\\\vee\\\\\n(x'-c)^2+y'^2=(c-b)c\n\\\\\\vee\\\\\n(a-c)e(x'^2+y'^2)\n + (-ab + bc - a + b)ex'\\\\\n + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y'\n + (a-b)ce=0\n$$</p>\n\n<p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p>\n\n<h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1>\n\n<p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p>\n\n<h2>Full log of computations</h2>\n\n<p>I computed the things I described above with a lot of help from <a href=\"http://www.sagemath.org/\" rel=\"nofollow noreferrer\">Sage</a>. I included the <a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation\" rel=\"nofollow noreferrer\">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>\n" }, { "answer_id": 2101262, "score": 1, "text": "<p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p>\n\n<p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p>\n\n<p>$$\\frac{(A-C)(B-D)}{(A-D)(B-C)}\\in\\mathbb R\\;.$$</p>\n\n<p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p>\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. 
\n\n<p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\\\\n\\text{with}\\quad x_{AC}=x_{BD}=1\\quad\\text{and}\\quad x_{AD}=x_{BC}=-1\\;.$$\nMultiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\\binom{6\\cdot 3}{2}=153$ dimensions. You have up to $\\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\\times330$ matrix of rank $68$.</p>
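\n\n<p>A minimal sketch of this bookkeeping (my own reconstruction for illustration; the labelling of the $18$ points and the float-based rank test are assumptions, not the original CAS code):</p>\n\n<pre><code>import numpy as np\nfrom itertools import combinations\n\npoints = list(range(18))                # stand-ins for A1..A6, B1..B6, C1..C6\npairs = list(combinations(points, 2))   # the 153 unordered pairs\nindex = {p: i for i, p in enumerate(pairs)}\n\ndef vec(a, b, c, d):\n    # exponent vector of the cross ratio (a-c)(b-d)/((a-d)(b-c))\n    v = np.zeros(len(pairs), dtype=int)\n    v[index[tuple(sorted((a, c)))]] += 1\n    v[index[tuple(sorted((b, d)))]] += 1\n    v[index[tuple(sorted((a, d)))]] -= 1\n    v[index[tuple(sorted((b, c)))]] -= 1\n    return v\n\ndef in_span(M, w):\n    # w lies in the column span of M iff appending it keeps the rank\n    return np.linalg.matrix_rank(np.column_stack([M, w])) == np.linalg.matrix_rank(M)\n</code></pre>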
\n\n<p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href=\"http://mathworld.wolfram.com/MiquelConfiguration.html\" rel=\"nofollow noreferrer\">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>\n" }, { "answer_id": 2104720, "score": 4, "text": "<p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \\cap A_4A_5 \\, , \\,\\,\\, O_2 = A_2A_3 \\cap A_5A_6 \\, , \\,\\,\\, O_3 = A_3A_4 \\cap A_6A_1$$ lie on a common line (the Pascal line) $p_A$. Draw the three circles $k_1\\,, \\,\\, k_2\\,,\\,\\, k_3$ centered at the points $O_1,\\,O_2,\\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (they have a common radical axis, passing through the points $S$ and $S^*$ and orthogonal to the Pascal line $p_A$).</p>\n\n<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p>\n\n<p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ on $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ is orthogonal to the circle $k_1$.</em></p>\n\n<p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, the circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, the circle $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \\, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_4, \\, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \\cap A_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, then since $B_1 \\in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p>
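\n\n<p>For readers who want to experiment numerically, the inversion used here is a one-liner (an editorial sketch with points as complex numbers, not part of the original proof):</p>\n\n<pre><code>def invert(p, center, r2):\n    # inversion in the circle with the given center and squared radius r2,\n    # i.e. p maps to center + r2 * (p - center) / |p - center|^2\n    w = p - center\n    return center + r2 * w / abs(w)**2\n</code></pre>\n\n<p>With $O_1$ as center and $r2$ the power of $O_1$ with respect to $k_A$, this map should swap $A_1\\leftrightarrow A_2$ and $A_4\\leftrightarrow A_5$.</p>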
\n\n<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p>\n\n<p><em>Proof:</em> Let $k_B$ be a circle passing through the points $B_1$ and $B_2$. By Lemma 1, the points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \\, k_2$ and $k_3$, which means that $k_B$ is orthogonal to $k_3$ as well.</p>\n\n<p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p>\n\n<p><em>Proof:</em> Follows directly from Lemma 2.</p>\n\n<p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where the points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of the original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p>\n\n<p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\\, B_2B_5,\\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p>\n\n<p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \\, C(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. Analogously, the three circles $k_B, \\, C(A_4,A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \\, B_2B_5$ and $A_5C_5$, which intersect at the common point $D$.</p>\n\n<p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p>\n\n<p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \\cdot DB_4 = A_2D \\cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \\cdot DB_4 = A_5D \\cdot DC_5.$$ Consequently $$ A_2D \\cdot DC_2 = B_1D \\cdot DB_4 = A_5D \\cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the same circle, which we call $k_{25}$.</p>
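\n\n<p>The chord products in Lemmas 4 and 5 are exactly powers of the point $D$; a small helper (again an editorial addition, not part of the original proof) makes this easy to check in examples:</p>\n\n<pre><code>def point_power(p, center, radius):\n    # power of the point p with respect to a circle; for any chord XY\n    # through p it equals the signed product pX * pY used above\n    return abs(p - center)**2 - radius**2\n</code></pre>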
\n\n<p><strong>Lemma 6.</strong> <em>The four points $C_5, \\, C_6, \\, C_1$ and $C_2$ lie on a common circle, which we call $(C)$.</em></p>\n\n<p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \\, C(A_6,A_1), \\, C(A_1, A_2), \\, k_{25}$ and $k_{A}$.</p>\n\n<p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \\, A_6, \\, C_3, \\, C_6$ and second to the points $A_1, \\, A_4, \\, C_1, \\, C_4$, in order to show first that the points $C_6, \\, C_1, \\, C_2$ and $C_3$ lie on a common circle, which has to be $(C)$ again, and second that the points $C_1, \\, C_2, \\, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p>\n\n<p><a href=\"http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4ldyI.png\" width=\"600\" alt=\"Figure\"></a></p>\n" } ]
Question: Six points lie on a circle Answer 1: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2,\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (they have a common radical axis, passing through the points $S$ and $S^*$ and orthogonal to the Pascal line $p_A$).</p>
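<p>As a quick numerical illustration of the Pascal line (this snippet is an editorial addition, not part of the original answer), collinearity of the three intersection points can be tested with a $3\times3$ determinant:</p> <pre><code>import numpy as np

def collinear(p, q, r, eps=1e-9):
    # P, Q, R are collinear iff det [[px, py, 1], ...] vanishes
    m = np.array([[p[0], p[1], 1.0], [q[0], q[1], 1.0], [r[0], r[1], 1.0]])
    return abs(np.linalg.det(m)) &lt; eps
</code></pre>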
<p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ on $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ is orthogonal to the circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, the circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, the circle $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_4, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, then since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through the points $B_1$ and $B_2$. By Lemma 1, the points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$, which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where the points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of the original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. Analogously, the three circles $k_B, \, C(A_4,A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$, which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the same circle, which we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, which we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \, A_6, \, C_3, \, C_6$ and second to the points $A_1, \, A_4, \, C_1, \, C_4$, in order to show first that the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be $(C)$ again, and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p>
Answer 2: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the point</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can then check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p>
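<p>As a side note (an editorial sketch, not in the original answer): an equation of the form $\alpha(x^2+y^2)+\beta x+\gamma y+\delta=0$ with $\alpha\neq0$ describes a circle whose center and radius can be read off directly, which is handy for inspecting the last condition:</p> <pre><code>import math

def circle_from_equation(alpha, beta, gamma, delta):
    # alpha*(x^2 + y^2) + beta*x + gamma*y + delta = 0 with alpha != 0
    cx, cy = -beta / (2 * alpha), -gamma / (2 * alpha)
    r2 = cx * cx + cy * cy - delta / alpha  # assumes r2 > 0, a real circle
    return (cx, cy), math.sqrt(r2)
</code></pre>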
<h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What are the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last condition is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 3: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>
the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. 
Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 5: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. 
This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. 
All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 6: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 7: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. 
Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 8: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 9: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 10: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4,A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$, which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5,$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the same circle, which we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, which we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \, A_6, \, C_3, \, C_6$ and second to the points $A_1, \, A_4, \, C_1, \, C_4$, in order to show first that the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be $(C)$ again, and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is again $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 11: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the point</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$.
On the other hand, this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can then check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What are the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last condition is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h2>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h2> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>
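<p>As a quick numerical companion to the approaches in this thread, the cross-ratio criterion quoted above (four points are cocircular iff their cross ratio is real) can be checked in a few lines of Python. This is a minimal sketch added for illustration: the function names and the tolerance are my own choices, not part of any answer or of the linked Sage session.</p> <pre><code># Minimal sketch (illustrative): test cocircularity of four points via
# the cross ratio, treating points as complex numbers. Collinear points
# also yield a real cross ratio, matching the one-point-compactification
# view used in this thread.
import cmath

def cross_ratio(a, b, c, d):
    # (A-C)(B-D) / ((A-D)(B-C)) as in the criterion quoted above
    return (a - c) * (b - d) / ((a - d) * (b - c))

def cocircular(a, b, c, d, tol=1e-9):
    # real cross ratio (up to numerical noise) iff cocircular/collinear
    return abs(cross_ratio(a, b, c, d).imag) &lt; tol

# Four points on the unit circle: the cross ratio should be real.
pts = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.0, 4.5)]
print(cocircular(*pts))   # True

# Nudge one point off the circle: an imaginary part appears.
pts[0] += 0.1 + 0.2j
print(cocircular(*pts))   # False
</code></pre> <p>Applied to points such as $B_1,B_2,B_4,B_5$ computed from the coordinates above, a test of this kind can give a cheap numerical confirmation of the factored cocircularity condition before attempting any symbolic verification.</p>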
Answer 24: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>
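<p>The bookkeeping described above is easy to mechanize. Below is a small, self-contained sketch of turning cocircularities into exponent vectors and testing span membership with a rank comparison. It is my own toy instance with six generic labels, not the $153\times330$ computation from the answer; fittingly, on this toy data the target is not in the span either, mirroring the negative outcome reported above.</p>
<pre><code># Each cocircularity (P,Q,R,S) contributes the exponent vector of its
# cross ratio (P-R)(Q-S)/((P-S)(Q-R)), indexed by unordered pairs.
# A target cocircularity follows from known ones iff its vector lies
# in the span of theirs, which is a rank test.  Toy instance only.
from itertools import combinations
import sympy as sp

points = list('ABCDEF')                      # six generic labels
index = {frozenset(c): i for i, c in enumerate(combinations(points, 2))}

def cr_vector(p, q, r, s):
    v = [0] * len(index)
    for pair, exp in [((p, r), 1), ((q, s), 1), ((p, s), -1), ((q, r), -1)]:
        v[index[frozenset(pair)]] += exp
    return v

known = [cr_vector('A', 'B', 'C', 'D'), cr_vector('C', 'D', 'E', 'F')]
target = cr_vector('A', 'B', 'E', 'F')

M = sp.Matrix(known).T                       # one column per known fact
in_span = M.rank() == M.row_join(sp.Matrix(target)).rank()
print(in_span)                               # False for this toy data
</code></pre>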
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 26: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 27: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 28: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 29: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 30: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 31: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 32: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 33: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 34: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 35: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 36: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 37: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
<p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through the points $B_1$ and $B_2$. By Lemma 1, the points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$, which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where the points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of the original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. Analogously, the three circles $k_B, \, C(A_4,A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$, which also intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently, $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5,$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the same circle, which we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, which we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \, A_6, \, C_3, \, C_6$ and then to the points $A_1, \, A_4, \, C_1, \, C_4$, to show first that the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which must again be $(C)$, and then that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is again $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 38: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I will not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the point</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p>
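<p>The displayed formula for $B_4$ can be double-checked symbolically: since $A_1$ is the point at infinity, $k(B_1A_1A_2)$ degenerates to the line $B_1A_2$, so $B_4$ must be collinear with $B_1$ and $A_2$ and concyclic with $B_1$, $A_4$ and $A_5$. A short sketch, assuming SymPy (the helper names are mine, and this is independent of the linked Sage session):</p>

<pre><code>import sympy as sp

a, b, d, e = sp.symbols('a b d e', real=True)

A2, A4, A5 = (a, 0), (b, 0), (1, 0)
B1 = (d, e)
den = (a - d)**2 + e**2
B4 = (((a*(b - d) + (a - b))*(a - d) + a*e**2)/den,
      (a - b)*(a - 1)*e/den)

def collinear(p, q, r):
    # Vanishes iff the three points are collinear.
    return sp.Matrix([[p[0], p[1], 1],
                      [q[0], q[1], 1],
                      [r[0], r[1], 1]]).det()

def concyclic(p, q, r, s):
    # Vanishes iff the four points are concyclic (or collinear).
    return sp.Matrix([[t[0]**2 + t[1]**2, t[0], t[1], 1]
                      for t in (p, q, r, s)]).det()

assert sp.simplify(collinear(B1, A2, B4)) == 0      # B_4 on the line B_1 A_2
assert sp.simplify(concyclic(B1, A4, A5, B4)) == 0  # B_4 on k(B_1 A_4 A_5)
</code></pre>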
<h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. On the other hand, this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can then check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$ (a symbolic check of this is sketched at the end of this answer). So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What are the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last condition is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h2>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h2> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p>
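<p>As promised above, here is a quick check of the claim that the circle described by the last condition of the first factorization already passes through $B_1$ and $B_4$: substituting either point into that condition should give zero identically. A sketch, again assuming SymPy (my own code, not part of the original Sage log):</p>

<pre><code>import sympy as sp

a, b, c, d, e, x, y = sp.symbols('a b c d e x y', real=True)

den = (a - d)**2 + e**2
B1 = (d, e)
B4 = (((a*(b - d) + (a - b))*(a - d) + a*e**2)/den,
      (a - b)*(a - 1)*e/den)

# The last condition from the first factorization above.
cond = ((a - c)*e*(x**2 + y**2) + (-a*b + b*c - a + b)*e*x
        + ((a - c)*((b - d)*d - e**2) - (a - b)*(c - d))*y
        + (a - b)*c*e)

for px, py in (B1, B4):
    assert sp.simplify(cond.subs({x: px, y: py})) == 0
</code></pre>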
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 40: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 41: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 42: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 43: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 44: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 45: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 46: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 47: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 48: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p> Answer 49: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal's line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2\,O_3$ respectively and each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then, $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (have a common radical axis, passing through the points $S$ and $S^*$, and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ orthogonal to circle $k_1$.</em></p> <p><em>Proof:</em> By construction of circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, circles $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_2, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. 
Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through points $B_1$ and $B_2$. By Lemma 1, points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$ which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notations of the original problem $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$ etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$ which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the the same circle, we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively first to points $A_3, \, A_6, \, C_3, \, C_6$ and second to points $A_1, \, A_4, \, C_1, \, C_4$ in order to show that first the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$ and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p> Answer 50: <h2>Choice of coordinates</h2> <p>I like coordinates. The incidence relations you are interested in are invariant under Möbius transformations, so a natural setup for this problem would be the one-point compactification of the plane. Think $\mathbb{CP}^1$ if you want to, but I'll not use complex numbers for coordinates in this answer. Since your setup is invariant under Möbius transformations, you can choose specific coordinates of three points without loss of generality. So I'll choose $A_1$ to be the point at infinity, and fixed coordinates on the $x$ axis for two other points. Thus the “circle” through all the $A_i$ is the $x$ axis in my setup, and I can use parameters for the $x$ coordinates of the remaining points.</p> <p>$$A_1=\infty\quad A_2=(a,0)\quad A_3=(0,0)\quad A_4=(b,0)\quad A_5=(1,0)\quad A_6=(c,0)$$</p> <p>I created <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html" rel="nofollow noreferrer">an interactive illustration</a> of this specific version of your configuration, where you can move the points around and see the configuration change accordingly. I alternated between fixed and variable points in an attempt to make things more symmetric, in the hope of making individual expressions not too complicated.</p> <p>Next we can pick an arbitrary point $B_1=(d,e)$ anywhere in the plane. This results in the</p> <p>$$B_4=\frac1{(a-d)^2+e^2}\begin{pmatrix} (a(b-d)+(a-b))(a-d)+ae^2 \\ (a-b)(a-1)e \end{pmatrix}$$</p> <p>computed as <a href="https://math.stackexchange.com/q/2102900/35416">the second point of intersection</a> between the <a href="https://math.stackexchange.com/q/1018949/35416">corresponding circles</a>.</p> <h2>First condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>Choosing $B_2=(x,y)$ will define the circle through $B_1,B_2,B_4$. 
On the other hand this allows us to construct $B_5$ as the intersection of $C(A_2,A_3)$ and $C(A_5,A_6)$. We can and check whether that point lies on the same circle. In general it does not. The <a href="https://math.stackexchange.com/a/599456/35416">condition</a> for $B_1,B_2,B_4,B_5$ to be cocircular can be factored into the following independent conditions:</p> <p>$$ a=1 \\\vee\\ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (a-c-1)(x^2+y^2) + 2cx = ac \\\vee\\ (a-c)e(x^2+y^2) + (-ab + bc - a + b)ex\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y + (a-b)ce=0 $$</p> <p>The first three of these conditions can be seen as non-degeneracy constraints. If $a=1$ then $A_2=A_5$, which is a degenerate situation. The second condition characterizes the situations where $B_1$ and $B_4$ would coincide, while the third is when $B_2$ and $B_5$ would be the same. So if we can assume that all points in the configuration are distinct, then we can concentrate on the last condition.</p> <p>It describes a circle, so we now know that the point $B_2$ must lie on a certain circle if $B_5$ is to be cocircular with $B_1,B_2,B_4$. Actually this circle already passes through $B_1$ and $B_4$. So knowing all the $A_i$ and $B_1$ already fixes the circle on which all the $B_i$ must lie.</p> <h2>Second condition for cocircularity of the <em>B</em><sub><em>i</em></sub></h2> <p>We can play the same game for $B_3$ and $B_6$ instead of $B_2$ and $B_5$. What's the conditions for $B_1,B_3,B_4,B_6$ being cocircular if we choose $B_3=(x',y')$?</p> <p>$$ (d-a)^2+e^2=(1-a)(b-a) \\\vee\\ (x'-c)^2+y'^2=(c-b)c \\\vee\\ (a-c)e(x'^2+y'^2) + (-ab + bc - a + b)ex'\\ + ((a-c)((b-d)d - e^2) - (a-b)(c-d))y' + (a-b)ce=0 $$</p> <p>The last conditon is the same as above, so if the cocircularity condition is satisfied for $B_1,B_2,B_4,B_5$ and all points are distinct, and if $B_3$ is chosen from that same circle, too, then the cocircularity condition must be satisfied for $B_1,B_3,B_4,B_6$ as well, and all six $B_i$ are cocircular.</p> <h1>Conditions for cocircularity of the <em>C</em><sub><em>i</em></sub></h1> <p>A similar game can be played for the $C_i$ as well. Their coordinates are considerably more complicated, though, and so are the conditions for cocircularity of four of these. But assuming the $B_i$ to be cocircular (i.e. assuming the respective last condition to hold for $B_2$ and $B_3$), one can demonstrate that all the points must be cocircular, too. In particular, if the condition for $B_2$ holds, then one can show that $(C_1,C_2,C_5,C_6)$ and $(C_2,C_3,C_4,C_5)$ are cocircular, and if the condition for $B_3$ holds, then $(C_1,C_2,C_3,C_4)$ and $(C_1,C_4,C_5,C_6)$ are cocircular. If both hold, then all points must be cocircular. You could start by defining the circle via $C_1,C_2,C_3$, then add $C_4$ due to $(C_1,C_2,C_3,C_4)$, add $C_5$ due to $(C_2,C_3,C_4,C_5)$ and add $C_6$ due to $(C_1,C_4,C_5,C_6)$. All again assuming distinct points, of course.</p> <h2>Full log of computations</h2> <p>I computed the things I described above with a lot of help from <a href="http://www.sagemath.org/" rel="nofollow noreferrer">Sage</a>. I included the <a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#my-computation" rel="nofollow noreferrer">full log of my Sage session</a>, including descriptions, in the blog post I created for the interactive illustration.</p> Answer 51: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. 
Answer 51: <p><em>This is an idea I had, but it doesn't seem to work as I had hoped.</em></p> <p>Here is one approach, building on the help of computer algebra and programming. Four points $A,B,C,D$ in the complex plane (or its one-point compactification) are cocircular iff their cross ratio is real:</p> <p>$$\frac{(A-C)(B-D)}{(A-D)(B-C)}\in\mathbb R\;.$$</p> <p>Now you can translate both the cocircularities you assume and the ones you want to prove into fractions of this kind. If you find a way to multiply fractions from known cocircularities in such a way that most terms cancel out and you are left with a fraction of one of the unknown cocircularities, then you have a proof, as the product of real numbers will be a real number again. But how do you find such a way to multiply terms?</p> <p>Think about this in a logarithmic way. The fraction above can be written as $$(A-C)^{x_{AC}}(B-D)^{x_{BD}}(A-D)^{x_{AD}}(B-C)^{x_{BC}}\\ \text{with}\quad x_{AC}=x_{BD}=1\quad\text{and}\quad x_{AD}=x_{BC}=-1\;.$$ Multiplying terms of this form means adding the corresponding values in the exponents. So you can think of this as a vector space: you have one dimension for each unordered pair of points, i.e. $\binom{6\cdot 3}{2}=153$ dimensions. You have up to $\binom{6}{4}=15$ known cocircularities for each of your circles, and you have essentially $3$ different vectors you can build from these, identifying vectors with their negatives. I let my CAS do this for me, and came up with a $153\times330$ matrix of rank $68$.</p> <p>Now you can check whether one of the vectors for the cocircularities you want to prove is in the span of this matrix. Unfortunately that does not seem to be the case. Pity! <a href="http://mathworld.wolfram.com/MiquelConfiguration.html" rel="nofollow noreferrer">Miquel's six circle theorem</a> can be proven by this approach, and this one here does look a bit like an extended version of that one, so my hopes were high.</p>
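<p>Both ingredients of this approach are easy to experiment with numerically. Below is a small Python sketch (an illustration only, using a tiny toy matrix rather than the actual $153\times330$ one): the reality test for the cross ratio, and the rank trick for deciding whether a target vector lies in the span of the known ones.</p>
<pre><code>import numpy as np

def cross_ratio(a, b, c, d):
    # (A-C)(B-D) / ((A-D)(B-C)); real iff the four points are cocircular
    # (or collinear) in the extended complex plane.
    return (a - c)*(b - d) / ((a - d)*(b - c))

pts = [np.exp(1j*t) for t in (0.3, 1.1, 2.5, 4.0)]      # on the unit circle
print(abs(cross_ratio(*pts).imag) &lt; 1e-12)              # True
print(abs(cross_ratio(1, 2+1j, 3, 5j).imag) &lt; 1e-12)    # False: generic points

# Span test: v is in the row span of M iff appending it keeps the rank.
M = np.array([[1, 1, 0, -1, -1, 0],
              [0, 1, 1, 0, -1, -1]])
v = M[0] + M[1]                                         # certainly in the span
print(np.linalg.matrix_rank(np.vstack([M, v])) == np.linalg.matrix_rank(M))
</code></pre>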
Answer 52: <p>By Pascal's theorem the three intersection points $$O_1 = A_1A_2 \cap A_4A_5 \, , \,\,\, O_2 = A_2A_3 \cap A_5A_6 \, , \,\,\, O_3 = A_3A_4 \cap A_6A_1$$ lie on a common line (the Pascal line) $p_A$. Draw the three circles $k_1\,, \,\, k_2\,,\,\, k_3$ centered at the points $O_1,\,O_2,\,O_3$ respectively, each of them orthogonal to the circle $k_A$ (the circle passing through the points $A_1, A_2, A_3, A_4, A_5, A_6$). Then $k_1, k_2, k_3$ intersect at two common points $S$ and $S^*$, i.e. the circles $k_1, k_2, k_3$ are coaxial (they have a common radical axis, passing through the points $S$ and $S^*$ and orthogonal to the Pascal line $p_A$).</p> <p>From now on, by $k(ABC)$ I denote the unique circle (or a straight line) passing through the points $A, B, C$.</p> <p><strong>Lemma 1.</strong> <em>Let $k_B$ be a circle different from $k_A$. Take an arbitrary point $B_1$ from $k_B$ and draw the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$. Then the second intersection point $B_4 (\neq B_1)$ of $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ also lies on $k_B$ if and only if $k_B$ is orthogonal to the circle $k_1$.</em></p> <p><em>Proof:</em> By construction of the circle $k_1$, if one performs an inversion with respect to $k_1$, then $A_1$ is mapped to $A_2$ and $A_4$ is mapped to $A_5$. Thus, the circle $k(B_1A_1A_2)$ is mapped to itself and is therefore orthogonal to $k_1$. Analogously, the circle $k(B_1A_4A_5)$ is also orthogonal to $k_1$. By the radical axis theorem applied to the circles $k_A, \, k(B_1A_1A_2)$ and $k(B_1A_4A_5)$, the three radical axes $B_1B_4, \, A_1A_2$ and $A_4A_5$ intersect at a common point, which is $O_1 = A_1A_2 \cap A_4A_5$ -- the center of circle $k_1$. Therefore, point $B_1$ is mapped to $B_4$ under the inversion with respect to circle $k_1$. Consequently, if $k_B$ is orthogonal to $k_1$, then since $B_1 \in k_B$ its inversive image $B_4$ also lies on $k_B$. Conversely, if both $B_1$ and $B_4$ lie on $k_B$, then the circle $k_B$ is orthogonal to $k_1$.</p> <p><strong>Lemma 2.</strong> <em>Let $B_1$ and $B_2$ be two different points in the plane. Let the pair of circles $k(B_1A_1A_2)$ and $k(B_1A_4A_5)$ intersect again at the second point $B_4 (\neq B_1)$, and let the pair of circles $k(B_2A_2A_3)$ and $k(B_2A_5A_6)$ intersect again at the second point $B_5 (\neq B_2)$. Then the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$ if and only if $k_B$ is orthogonal to the three circles $k_1, k_2, k_3$.</em></p> <p><em>Proof:</em> Let $k_B$ be a circle passing through the points $B_1$ and $B_2$. By Lemma 1, the points $B_4$ and $B_5$ also lie on $k_B$ if and only if $k_B$ is orthogonal to both circles $k_1$ and $k_2$. Therefore, the radical axis $SS^*$ of $k_1$ and $k_2$ passes through the center of $k_B$ whenever $k_B$ is orthogonal to $k_1$ and $k_2$. However, by construction, $SS^*$ is a common radical axis for all three circles $k_1, \, k_2$ and $k_3$, which means that $k_B$ is orthogonal to $k_3$ as well.</p> <p><strong>Corollary 3.</strong> <em>In the setting of Lemma 2, let the four points $B_1, B_2, B_4, B_5$ lie on a common circle $k_B$. If $B_3$ is another (arbitrary) point on $k_B$, then the second intersection point $B_6$ of the pair of circles $k(B_3A_3A_4)$ and $k(B_3A_6A_1)$ also lies on $k_B$.</em></p> <p><em>Proof:</em> Follows directly from Lemma 2.</p> <p>From now on, I am in the setting of Lemma 2, Corollary 3 and the statement of the original problem, where the points $A_1, A_2, A_3, A_4, A_5, A_6$ lie on the circle $k_A$ and the points $B_1, B_2, B_3, B_4, B_5, B_6$ lie on the circle $k_B$. In the notation of the original problem, $C(A_6,A_1)$ is the circle $k(B_3A_6A_1)$, the circle $C(A_1,A_2)$ is the circle $k(B_1A_1A_2)$, the circle $C(A_5,A_6)$ is the circle $k(B_2A_5A_6)$, etc.</p> <p><strong>Lemma 4.</strong> <em>The four lines $B_1B_4,\, B_2B_5,\, A_2C_2$ and $A_5C_5$ intersect in a common point $D$.</em></p> <p><em>Proof:</em> Define $D$ as the intersection point of the lines $B_1B_4$ and $B_2B_5$. Let us look at the three circles $k_B, \, C(A_1,A_2)$ and $C(A_2,A_3)$, where $C_2$ is the second point of intersection of $C(A_1,A_2)$ and $C(A_2,A_3)$, the one that is different from $A_2$. Then their three corresponding radical axes $B_1B_4,\, B_2B_5$ and $A_2C_2$ intersect at the common point $D$. 
Analogously, the three circles $k_B, \, C(A_4,A_5)$ and $C(A_5,A_6)$ define the three radical axes $B_1B_4, \, B_2B_5$ and $A_5C_5$, which intersect at the common point $D$.</p> <p><strong>Lemma 5.</strong> <em>The four points $A_2, A_5, C_2, C_5$ lie on a common circle $k_{25}$.</em></p> <p><em>Proof:</em> As a consequence of Lemma 4, the two chords $B_1B_4$ and $A_2C_2$ of the circle $C(A_1,A_2)$ satisfy $$B_1D \cdot DB_4 = A_2D \cdot DC_2.$$ Similarly, again by Lemma 4, the two chords $B_1B_4$ and $A_5C_5$ of the circle $C(A_4,A_5)$ satisfy $$B_1D \cdot DB_4 = A_5D \cdot DC_5.$$ Consequently $$ A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5,$$ which is possible if and only if the four points $A_2, A_5, C_2$ and $C_5$ lie on the same circle, which we call $k_{25}$.</p> <p><strong>Lemma 6.</strong> <em>The four points $C_5, \, C_6, \, C_1$ and $C_2$ lie on a common circle, which we call $(C)$.</em></p> <p><em>Proof:</em> It follows from Lemma 5 and Miquel's theorem applied to the circles $C(A_5,A_6), \, C(A_6,A_1), \, C(A_1, A_2), \, k_{25}$ and $k_{A}$.</p> <p>Finally, we can apply Lemmas 5 and 6 consecutively, first to the points $A_3, \, A_6, \, C_3, \, C_6$ and second to the points $A_1, \, A_4, \, C_1, \, C_4$, in order to show first that the points $C_6, \, C_1, \, C_2$ and $C_3$ lie on a common circle, which has to be again $(C)$, and second that the points $C_1, \, C_2, \, C_3$ and $C_4$ also lie on a common circle, which is $(C)$.</p> <p><a href="http://gagern.github.io/CindyJS/se/math/2017/01/18/eight-circles.html#another-figure" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ldyI.png" width="600" alt="Figure"></a></p>
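<p>Lemmas 4 and 5 both run on the power of a point: every chord of a circle through a fixed point $D$ has the same product of segment lengths. A short numeric illustration of this fact (with an arbitrary circle and interior point, not tied to the configuration above):</p>
<pre><code>import numpy as np

center, r = np.array([0.0, 0.0]), 2.0
D = np.array([0.5, 0.3])                       # fixed interior point

def chord_product(theta):
    u = np.array([np.cos(theta), np.sin(theta)])   # unit chord direction
    # Solve |D + t*u - center|^2 = r^2; the two roots give the chord ends.
    bb = 2 * u @ (D - center)
    cc = (D - center) @ (D - center) - r**2
    disc = np.sqrt(bb*bb - 4*cc)
    return abs((-bb + disc)/2) * abs((-bb - disc)/2)

print([round(chord_product(t), 12) for t in (0.1, 0.9, 1.7, 2.8)])
</code></pre>
<p>All four products agree (here they equal $r^2 - |D|^2$), which is exactly the equality $A_2D \cdot DC_2 = B_1D \cdot DB_4 = A_5D \cdot DC_5$ exploited in Lemma 5.</p>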
How to imagine vector spaces (and projective spaces) over a finite field
2,099,192
11
https://math.stackexchange.com/questions/2099192/how-to-imagine-vector-spaces-and-projective-spaces-over-a-finite-field
<p>So I have been learning about projective spaces for the last few hours, and I think I understand the basics pretty well, but there is an exercise which I do not know how to solve at all. It comes down to being able to imagine and understand vector spaces and projective spaces over a finite field.</p> <p>Let's say my projective space is the set of all one-dimensional subspaces of my vector space. Then each point of my projective space has at least one, and most of the time multiple, representatives (in other words: direction vectors) that span the corresponding line. The picture below shows the projective plane over the finite field with 3 elements (the Galois field $GF(3)$). I would now like to find a description for the points in this picture, where each point should have some corresponding representatives. How do I determine which vector belongs to which point?</p> <p><a href="https://i.stack.imgur.com/8TyQC.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/8TyQC.gif" alt="enter image description here"></a></p> <p>In $\mathbb{R}^3$ I can very well see that a vector $(1,0,0)$ spans my first axis, but what about a finite field? Any help is greatly appreciated!</p>
[ { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (2c, c, c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces each contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. 
When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$.</p>\n\n<p>Hopefully you've played around with the Fano plane too; things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries, such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning; as you work further in, you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" } ]
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
{ "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (referring to the span of a vector), as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries, such as\n$$(1:0:0)$$\nwith the convention that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains the vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives for the points of the line $\\langle \\mathbf{u}, \\mathbf{v} \\rangle$ by considering the vectors $\\{\\mathbf{v}\\} \\cup \\{\\mathbf{u} + \\lambda \\mathbf{v} : \\lambda \\in \\mathbb{F}\\}$ (we would then scale each vector, if necessary, to make the first nonzero entry equal to $1$).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning; as you work further in, you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }
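That "fill in a line" recipe is easy to mechanize. A final added sketch (hypothetical helper names; the modular inverse uses Fermat's little theorem, so it assumes $q$ is prime) lists the $q + 1$ points of the line through $\langle \mathbf{u} \rangle$ and $\langle \mathbf{v} \rangle$ in canonical homogeneous coordinates:

```python
q = 3  # any prime q works here

def normalize(v):
    """Scale v so its first nonzero entry is 1 (canonical representative)."""
    lead = next(x for x in v if x)
    inv = pow(lead, q - 2, q)   # inverse of lead mod q (Fermat; q prime)
    return tuple(inv * x % q for x in v)

def points_on_line(u, v):
    """Canonical representatives of the q + 1 points on the line <u, v>."""
    reps = [v] + [tuple((a + lam * b) % q for a, b in zip(u, v))
                  for lam in range(q)]
    return sorted(normalize(r) for r in reps)

print(points_on_line((1, 0, 0), (0, 1, 0)))
# -> [(0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 2, 0)]
```

Running it on $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ reproduces the two extra points named above, $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$.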
We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. 
This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. 
We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. 
When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. 
We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. 
This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. 
We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. 
When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. 
We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. 
\n\n<p><strong>Two-dimensional subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ with $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start ($q^2 - 1$ choices), and then any vector not in the span of the first ($q^2 - q$ choices); there are $(q^2 - 1)(q^2 - q)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of the overcounting that arises from the multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$</p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number as of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there were a \"natural\" bijection between the one- and two-dimensional subspaces in our three-dimensional space?</em></p>
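\n\n<p>The same brute-force idea confirms the count (again just an illustrative sketch of mine): form the span of every non-proportional pair of nonzero vectors and collect the distinct planes.</p>\n\n<pre><code>from itertools import product, combinations\n\nq = 3\nnonzero = [v for v in product(range(q), repeat=3) if any(v)]\n\ndef span2(u, v):\n    # all F_q-linear combinations c1*u + c2*v\n    return frozenset(\n        tuple((c1 * a + c2 * b) % q for a, b in zip(u, v))\n        for c1 in range(q) for c2 in range(q)\n    )\n\nindependent = (\n    (u, v) for u, v in combinations(nonzero, 2)\n    if all(tuple((c * x) % q for x in u) != v for c in range(q))\n)\nplanes = {span2(u, v) for u, v in independent}\nprint(len(planes))  # 13, matching the number of one-dimensional subspaces\n</code></pre>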
\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (2c, c, c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out the elements of some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of them.</p>
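\n\n<p>Here is a mechanical check of that list (my own helper snippet, with each representative written out at $c = 1$): every nonzero vector of $\\Bbb F_3^3$ should appear among the nonzero multiples of exactly one representative.</p>\n\n<pre><code>from itertools import product\n\nq = 3\nreps = [(1, 0, 0), (0, 1, 0), (0, 0, 1),\n        (0, 1, 1), (1, 0, 1), (1, 1, 0),\n        (0, 2, 1), (2, 0, 1), (2, 1, 0),\n        (1, 1, 2), (1, 2, 1), (2, 1, 1),\n        (1, 1, 1)]\n\n# pool the nonzero multiples of all 13 representatives\ncovered = {tuple((c * x) % q for x in r) for r in reps for c in range(1, q)}\nnonzero = {v for v in product(range(q), repeat=3) if any(v)}\nprint(len(covered), covered == nonzero)  # 26 True\n</code></pre>\n\n<p>Getting $26$ distinct vectors shows no two representatives are proportional, and the equality shows nothing is missed.</p>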
\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write down some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, without repeats, is a bit tedious (and not exactly easy!).</p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set of a linear equation, $\\{(x, y, z) : ax + by + cz = 0\\}$ (just as you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements of our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Each entry sits in the same position as its normal vector in the previous table; $(a, b, a)$, for instance, occupies the slot of $(2c, 0, c)$, and indeed the two subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>
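\n\n<p>The normal-vector description also makes the whole incidence structure easy to generate (one more illustrative sketch; the function names are mine): reuse the $13$ canonical point representatives as normal vectors, and attach to each normal the points it annihilates.</p>\n\n<pre><code>from itertools import product\n\nq = 3\n# the 13 canonical points: first nonzero entry equal to 1\npoints = [p for p in product(range(q), repeat=3)\n          if any(p) and next(x for x in p if x) == 1]\n\ndef on_line(normal, p):\n    # p lies on the line with this normal iff the dot product vanishes mod q\n    return sum(a * b for a, b in zip(normal, p)) % q == 0\n\nlines = [[p for p in points if on_line(n, p)] for n in points]\nprint(len(points), len(lines))              # 13 13\nprint(all(len(l) == q + 1 for l in lines))  # True: every line has 4 points\n</code></pre>\n\n<p>Running it reproduces the combinatorics of the picture: $13$ points, $13$ lines, and $q + 1 = 4$ points on each line.</p>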
\n\n<p>Again, to build familiarity, it would be good to pick a subspace or two, write out all of their vectors, and identify the one-dimensional subspaces each one contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces lying in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>
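\n\n<p>That figure is easy to reproduce, assuming (as holds here, since $\\Bbb F_3$ has no nontrivial field automorphisms) that the automorphism group is $PGL(3, 3)$:</p>\n\n<pre><code>q = 3\n# order of GL(3, q): choose three linearly independent rows in turn\ngl_order = (q**3 - 1) * (q**3 - q) * (q**3 - q**2)\n# the q - 1 scalar matrices act trivially on projective points\nprint(gl_order // (q - 1))  # 5616\n</code></pre>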
\n\n<p><strong>A note of caution</strong><br>\nI didn't write things out for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: you're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$.</p>
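\n\n<p>For a small taste of that quotient construction, here is a sketch of multiplication in $GF(9) = GF(3)[X]/(X^2 + 1)$ (the modulus $X^2 + 1$ is my choice of irreducible polynomial; it works because $-1$ is not a square in $GF(3)$, and any degree-$2$ irreducible would do):</p>\n\n<pre><code>p = 3\n\n# represent a + b*X by the pair (a, b); reducing modulo X^2 + 1 uses X^2 = -1\ndef gf9_mul(u, v):\n    a, b = u\n    c, d = v\n    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2 = (ac - bd) + (ad + bc)X\n    return ((a * c - b * d) % p, (a * d + b * c) % p)\n\nx = (0, 1)            # the class of X\nprint(gf9_mul(x, x))  # (2, 0), i.e. X^2 = -1 = 2 in GF(3)\n</code></pre>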
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. 
We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. 
This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. 
We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. 
When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. 
We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a \"natural\" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p>\n\n<p><strong>Specific one-dimensional subspaces</strong><br>\nTo list the points of this projective plane, we can list $13$ vectors in $\\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p>\n\n<p>\\begin{array}{lll}\n (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\\\\n (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\\\\n(0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\\\\n(c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\\\\n &amp; (c, c, c), &amp; \\\\\n\\end{array}</p>\n\n<p>where here $(2c, c, c)$ means $\\{c(2, 1, 1) : c \\in \\Bbb F_3\\}$.</p>\n\n<p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p>\n\n<p><strong>Specific two-dimensional subspaces</strong><br>\nThese are messier. It's easy to pick two linearly independent vectors $\\mathbf{v}, \\mathbf{w}$ and write some subspaces $\\operatorname{Span}(\\{\\mathbf{v}, \\mathbf{w}\\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p>\n\n<p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\\{(x, y, z) : ax + by + cz = 0\\}$ (just like you're used to with vector spaces over $\\Bbb R$), and each such equation is \"uniquely\" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p>\n\n<p>\\begin{array}{lll}\n(0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\\\\n(b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\\\\n(b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\\\\n(a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\\\\n &amp; (a, b, 2a + 2b), &amp; \\\\\n\\end{array}</p>\n\n<p>where, for instance, $(a, b, a)$ represents the subspace $\\{a(1, 0, 1) + b(0, 1, 0) : a, b \\in \\Bbb F_3\\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. 
This signifies that the subspaces are orthogonal: $(2c, 0, c) \\cdot (a, b, a) = 2ac + ac = 3ac \\equiv 0 \\pmod 3$.</p>\n\n<p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p>\n\n<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href=\"http://www.uwyo.edu/moorhouse/pub/planes/\">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p>\n\n<p><strong>A note of caution</strong><br>\nI didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\\Bbb Z /p\\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work \"quickly\" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p>\n\n<p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>\n" }, { "answer_id": 2099604, "score": 5, "text": "<p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a \"1\" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in\n$$\\langle (1,0,0) \\rangle$$\nor to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as\n$$(1:0:0)$$\nand the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\\langle (1,0,0) \\rangle$, $\\langle (0,1,0) \\rangle$, $\\langle (0,0,1) \\rangle$.</p>\n\n<p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\\langle (1,0,0) \\rangle$ and $\\langle (0,1,0) \\rangle$ would be denoted $\\langle (1,0,0), (0,1,0) \\rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\\langle (1,1,0) \\rangle$ and $\\langle (1,2,0) \\rangle$. 
We can get unique representatives from the line $\\langle \\bf{u}, \\bf{v} \\rangle$ by considering the vectors $\\{\\bf{v}\\} \\cup \\{\\bf{u} + \\lambda \\bf{v} \\ : \\ \\lambda \\in \\mathbb{F} \\}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p>\n\n<p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p>\n" }, { "answer_id": 2099509, "score": 8, "text": "<p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \\Bbb F_q^3$ over $\\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p>\n\n<p><strong>One-dimensional subspaces</strong><br>\nLet's start with the fact that a one-dimensional subspace $L$ has the form $L = \\{c \\mathbf{v} : c \\in \\Bbb F_q\\}$; it consists of all scalar multiples of some vector $\\mathbf{v} \\in V$. In particular, there are as many points in $L$ as there are scalars in $\\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p>\n\n<p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\\mathbf{v}_1$ and $\\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the \"span of a set of vectors\") $L$.</p>\n\n<p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p>\n\n<p>$$\\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p>\n\n<p>one dimensional subspaces of the three-dimensional vector space over $\\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\\ \\checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p>\n\n<p><strong>Two-dimensional Subspaces</strong><br>\nTo think about lines in the picture, we need to understand two-dimensional subspaces of $\\Bbb F_3^3$. Every pair of nonzero vectors $\\mathbf{v}, \\mathbf{w}$ where $\\mathbf{v} \\neq c \\mathbf{w}$ determines a two-dimensional subspace of the form $P = \\{c_1 \\mathbf{v} + c_2 \\mathbf{w} : c_1, c_2 \\in \\Bbb F_q\\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\\mathbf{v}, \\mathbf{w})$ that span $P$.</p>\n\n<p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p>\n\n<p>$$\\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p>\n\n<p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
Question: How to imagine vector spaces (and projective spaces) over a finite field Answer 1: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now, when you want vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part, you can choose which points to label with which coordinates at the beginning; as you work further in, you may have to set up a system of a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 2: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generates (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one-dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q^2 - q)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of the overcounting that arises due to multiple ways of generating the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$</p> <p>two-dimensional subspaces of three-dimensional $V$, the same as the number of one-dimensional subspaces!</p>
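As a sanity check on both counts (this snippet is my own addition, not part of the original answer), we can enumerate the subspaces of $\Bbb F_3^3$ by brute force: one-dimensional subspaces are sets of scalar multiples, and two-dimensional subspaces are kernels of the dot product with a normal vector, as discussed further below.

```python
# Brute-force count of the 1- and 2-dimensional subspaces of GF(3)^3.
from itertools import product

q = 3
nonzero = [v for v in product(range(q), repeat=3) if any(v)]  # 26 vectors

def multiples(v):
    """The nonzero scalar multiples of v; this set *is* the projective point."""
    return frozenset(tuple((c * x) % q for x in v) for c in range(1, q))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

points = {multiples(v) for v in nonzero}                   # 1-dim subspaces
planes = {frozenset(w for w in nonzero if dot(a, w) == 0)  # 2-dim subspaces,
          for a in nonzero}                                # one per normal <a>

print(len(points), len(planes))  # both print 13 = 1 + q + q**2
print(all(len(P) == q**2 - 1 for P in planes))  # each plane: 8 nonzero vectors
```

Both tallies come out to $13 = 1 + 3 + 3^2$, in agreement with the two formulas.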
<p><em>Food for thought: wouldn't it be nice if there were a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (2c, c, c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write down some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, without repeats, is a bit tedious (and not exactly easy!).</p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set of a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just as you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements of our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it sits in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again, to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces they contain.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 2: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 3: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 4: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
<p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image fairly freely, although before long you will have to be careful about how you're constructing things (to make sure that points on the same line correspond to one-dimensional subspaces lying in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write things out for a general $n$-dimensional vector space, because they get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: you're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too; things are very nice there!</p> Answer 4: <p>There are some decisions to be made when we write down representatives for the points of a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points by vectors that have been scaled to have a "1" as the first nonzero entry (or perhaps as the last nonzero entry, in some books). To make it clear that we mean this to represent a subspace, two common notations are angle brackets (referring to the span of a vector), as in $$\langle (1,0,0) \rangle,$$ or what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries, such as $$(1:0:0),$$ with the convention that representatives differing by a scalar multiple are identified. Because a finite projective plane has so many symmetries, you can choose any three points that are not all on the same line to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>When you want to fill in vector representatives for the other points, just work through them line by line. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains the vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives on the line $\langle \mathbf{u}, \mathbf{v} \rangle$ by considering the vectors $\{\mathbf{v}\} \cup \{\mathbf{u} + \lambda \mathbf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector, if necessary, to make its first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning; as you work further in, you may have to set up a system of a few linear equations and solve it to make sure your labelings are consistent.</p>
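<p>If it helps to see that recipe carried out mechanically, here is a standalone Python sketch in the same spirit as the snippets in the previous answer (again, the helper names are mine). It lists representatives for the four points of $\langle (1,0,0), (0,1,0) \rangle$ and normalizes each so its first nonzero entry is $1$:</p>

<pre><code>q = 3
u, v = (1, 0, 0), (0, 1, 0)

def add_scaled(u, lam, v):
    # u + lam*v, coordinatewise mod q.
    return tuple((x + lam * y) % q for x, y in zip(u, v))

def normalize(w):
    # Scale w so its first nonzero entry is 1 (q is prime, so the
    # leading entry is invertible mod q).
    lead = next(x for x in w if x != 0)
    inv = pow(lead, -1, q)
    return tuple(inv * x % q for x in w)

reps = [v] + [add_scaled(u, lam, v) for lam in range(q)]
print([normalize(w) for w in reps])
# [(0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 2, 0)]
</code></pre>

<p>The four normalized tuples are exactly the homogeneous coordinates $(0:1:0)$, $(1:0:0)$, $(1:1:0)$, and $(1:2:0)$ of the points on this line.</p>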
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 6: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 7: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 8: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 9: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 10: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 11: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 12: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 13: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 14: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 15: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 16: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 17: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
<p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!).</p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p>
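<p>That positional pairing between the two tables can also be verified mechanically. In the sketch below (Python, my own addition; the <code>reps</code> list is redeclared from the previous snippet so the block runs on its own), each point representative is reused as the normal vector of a line, and every line of this plane should contain exactly $q + 1 = 4$ of the $13$ points.</p>
<pre><code>reps = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (0, 1, 1), (1, 0, 1), (1, 1, 0),
        (0, 2, 1), (2, 0, 1), (2, 1, 0),
        (1, 1, 2), (1, 2, 1), (2, 1, 1),
        (1, 1, 1)]

def dot(u, v):
    """Dot product in F_3."""
    return sum(a * b for a, b in zip(u, v)) % 3

for normal in reps:  # each point representative doubles as a line's normal
    on_line = [p for p in reps if dot(normal, p) == 0]
    assert len(on_line) == 4  # q + 1 = 4 points on every line of PG(2, 3)
print("each of the 13 lines contains exactly 4 of the 13 points")
</code></pre>
<p>Note that in characteristic $3$ a point may lie on its own "polar" line (for example $(2, 1, 1) \cdot (2, 1, 1) = 6 \equiv 0 \pmod 3$), which is perfectly fine here; orthogonality is only being used as a bookkeeping device for incidence.</p>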
<p>Again, to build familiarity, it would be good to pick a subspace or two and write out all the vectors in these subspaces, as well as the one-dimensional subspaces they contain.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: you're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$.</p> <p>Hopefully you've played around with the Fano plane too; things are very nice there!</p>
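<p>For what it's worth, the brute-force count from the first snippet specializes to the Fano plane just by setting $q = 2$ (again my own sketch, not part of the original answer): one expects $1 + 2 + 4 = 7$ points and $7$ lines.</p>
<pre><code>from itertools import product

q = 2
nonzero = [v for v in product(range(q), repeat=3) if any(v)]

def span(gens):
    """All F_2-linear combinations of the generators, reduced mod 2."""
    return frozenset(tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % q
                           for i in range(3))
                     for coeffs in product(range(q), repeat=len(gens)))

points = {span([v]) for v in nonzero}
lines = {span([v, w]) for v in nonzero for w in nonzero
         if w not in span([v])}
print(len(points), len(lines))  # prints: 7 7
</code></pre>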
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 20: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 21: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 22: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 23: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 24: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 25: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 26: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 27: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 28: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. 
For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 29: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! 
<em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. 
But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p> Answer 30: <p>There are some decisions to be made when we think of writing representatives for points in a finite projective plane, because in general each point has more than one representing vector. For this reason, we usually represent the points with vectors that have been scaled to have a "1" as the first nonzero entry (or maybe as the last nonzero entry in some books). To make it clear that we mean this to represent a subspace, two common notations are to use angled brackets (to refer to the span of a vector) as in $$\langle (1,0,0) \rangle$$ or to use what are known as <em>homogeneous coordinates</em>; these are usually written with colons between the entries such as $$(1:0:0)$$ and the convention is that we ignore differences of a scalar multiple. Because there are so many symmetries of a finite projective plane, you can choose any three points (that are not all on the same line) to label $\langle (1,0,0) \rangle$, $\langle (0,1,0) \rangle$, $\langle (0,0,1) \rangle$.</p> <p>Now when you want to fill in vector representatives for the other points, just work on filling them in. For example, the line containing $\langle (1,0,0) \rangle$ and $\langle (0,1,0) \rangle$ would be denoted $\langle (1,0,0), (0,1,0) \rangle$; it contains vectors represented by linear combinations of these two vectors, so we see that the other two points we need to fill in are $\langle (1,1,0) \rangle$ and $\langle (1,2,0) \rangle$. We can get unique representatives from the line $\langle \bf{u}, \bf{v} \rangle$ by considering the vectors $\{\bf{v}\} \cup \{\bf{u} + \lambda \bf{v} \ : \ \lambda \in \mathbb{F} \}$ (we would then scale each vector if necessary to make the first nonzero entry equal to 1).</p> <p>For the most part you can choose which points to label with which coordinates in the beginning, as you work further in you may have to set up a system with a few linear equations and then solve it to make sure your labelings are consistent.</p> Answer 31: <p>Since your question specifically mentions projective planes, let's suppose that we have the three-dimensional vector space $V = \Bbb F_q^3$ over $\Bbb F_q$, the finite field with $q$ elements (that is, $GF(q)$ in your notation).</p> <p><strong>One-dimensional subspaces</strong><br> Let's start with the fact that a one-dimensional subspace $L$ has the form $L = \{c \mathbf{v} : c \in \Bbb F_q\}$; it consists of all scalar multiples of some vector $\mathbf{v} \in V$. In particular, there are as many points in $L$ as there are scalars in $\Bbb F_q$: there are $q$ of them, and $q - 1$ are nonzero.</p> <p>But as you've pointed out, there aren't as many one-dimensional subspaces of $V$ as there are nonzero vectors in $V$ (when $V$ is finite, at least); two vectors $\mathbf{v}_1$ and $\mathbf{v}_2 $ determine the same subspace when they are scalar multiples of each other. 
Put another way, any of the $q - 1$ nonzero vectors in a one-dimensional subspace $L$ generate (in the sense of the "span of a set of vectors") $L$.</p> <p>Thus, by counting nonzero vectors in $V$, we've overcounted one-dimensional subspaces by a factor of $q - 1$, so there are</p> <p>$$\frac{q^3 - 1}{q - 1} = 1 + q + q^2$$</p> <p>one dimensional subspaces of the three-dimensional vector space over $\Bbb F_q$. In your projective plane example, we have $q = 3$ for a total of $1 + 3 + 9 = 13\ \checkmark$ one-dimensional subspaces of $V$, or points in the projective plane.</p> <p><strong>Two-dimensional Subspaces</strong><br> To think about lines in the picture, we need to understand two-dimensional subspaces of $\Bbb F_3^3$. Every pair of nonzero vectors $\mathbf{v}, \mathbf{w}$ where $\mathbf{v} \neq c \mathbf{w}$ determines a two-dimensional subspace of the form $P = \{c_1 \mathbf{v} + c_2 \mathbf{w} : c_1, c_2 \in \Bbb F_q\}$. This subspace contains $q^2$ vectors, and $q^2 - 1$ of them are nonzero. As we noted above, not all pairs of vectors in $P$ generate $P$: we must be careful that they do not lie on the same one-dimensional subspace. We can pick any nonzero vector in $P$ to start, and then any vector not in the span of the first; there are $(q^2 - 1)(q - 1)$ pairs $(\mathbf{v}, \mathbf{w})$ that span $P$.</p> <p>To count two-dimensional subspaces, we pick two nonzero vectors in $V$ not lying in the same one-dimensional subspace, and then take care of overcounting that arises due to multiple ways to generate the same plane: there are</p> <p>$$\frac{(q^3 - 1)(q^3 - q)}{(q^2 - 1)(q^2 - q)} = 1 + q + q^2$$ </p> <p>two-dimensional subspaces of three-dimensional $V$, the same number of one-dimensional subspaces! <em>Food for thought: wouldn't it be nice if there was a "natural" bijection between the one- and two-dimensional subspaces, in our three-dimensional space?</em></p> <p><strong>Specific one-dimensional subspaces</strong><br> To list the points of this projective plane, we can list $13$ vectors in $\Bbb F_3^3$ that don't lie (pairwise) in the same one-dimensional subspace:</p> <p>\begin{array}{lll} (c, 0, 0), &amp; (0, c, 0), &amp; (0, 0, c) \\ (0, c, c), &amp; (c, 0, c), &amp; (c, c, 0) \\ (0, 2c, c), &amp; (2c, 0, c), &amp; (2c, c, 0) \\ (c, c, 2c), &amp; (c, 2c, c), &amp; (c, c, 2c) \\ &amp; (c, c, c), &amp; \\ \end{array}</p> <p>where here $(2c, c, c)$ means $\{c(2, 1, 1) : c \in \Bbb F_3\}$.</p> <p>To get some experience with vector spaces over finite fields, you should actually write out elements in some of these subspaces, and convince yourself that all $26$ nonzero vectors of $\Bbb F_3^3$ live in exactly one of these one-dimensional subspaces.</p> <p><strong>Specific two-dimensional subspaces</strong><br> These are messier. It's easy to pick two linearly independent vectors $\mathbf{v}, \mathbf{w}$ and write some subspaces $\operatorname{Span}(\{\mathbf{v}, \mathbf{w}\})$, but listing them all, and without repeats, is a bit tedious (and not exactly easy!). </p> <p>Instead, recall that two-dimensional subspaces can always be described as the solution set to a linear equation $\{(x, y, z) : ax + by + cz = 0\}$ (just like you're used to with vector spaces over $\Bbb R$), and each such equation is "uniquely" determined by a vector $(a, b, c)$ normal to the subspace. This is great, because we just computed all the one-dimensional subspaces! 
With a bit of work, we can write down what generic elements in our two-dimensional subspaces look like:</p> <p>\begin{array}{lll} (0, a, b), &amp; (a, 0, b), &amp; (a, b, 0) \\ (b, a, 2a), &amp; (a, b, 2a), &amp; (a, 2a, b) \\ (b, a, a), &amp; (a, b, a), &amp; (a, a, b) \\ (a, b, a + b), &amp; (a, a+b, b), &amp; (a+b, a, b) \\ &amp; (a, b, 2a + 2b), &amp; \\ \end{array}</p> <p>where, for instance, $(a, b, a)$ represents the subspace $\{a(1, 0, 1) + b(0, 1, 0) : a, b \in \Bbb F_3\}$. Additionally, it's in the same place as $(2c, 0, c)$ in the previous table. This signifies that the subspaces are orthogonal: $(2c, 0, c) \cdot (a, b, a) = 2ac + ac = 3ac \equiv 0 \pmod 3$.</p> <p>Again to build familiarity, it would be good to pick a subspace or two and write out all vectors in these subspaces, as well as the one-dimensional subspaces it contains.</p> <p>The good news here is that projective planes have <em>lots</em> of symmetry (in fact, yours has <a href="http://www.uwyo.edu/moorhouse/pub/planes/">5616</a> automorphisms!); you can start building a correspondence between vectors and points in the image freely, although before long, you will have to be careful about how you're constructing things (to make sure points on the same line correspond to one-dimensional subspaces in a common two-dimensional subspace). It's probably better to create the image from scratch, using the picture as a guide for what should happen.</p> <p><strong>A note of caution</strong><br> I didn't write out things for a general $n$-dimensional vector space, because things get more complicated. Another source of difficulty is the structure of $GF(p^n)$ for prime $p$ and $n &gt; 1$. When you're working over $GF(p)$ for prime $p$, things are nice: You're just working in the ring $\Bbb Z /p\Bbb Z$, doing arithmetic modulo $p$. But this stops being the case in $GF(p^n)$, and I personally don't know how to work "quickly" in such a field, which is often constructed as a quotient of the polynomial ring $GF(p)[X]$ by an irreducible polynomial of degree $n$. </p> <p>Hopefully you've played around with the Fano plane too, things are very nice there!</p>
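To make the counting above concrete, here is a minimal Python sketch (illustrative only, written for $q = 3$, where field arithmetic is just arithmetic mod $3$) that enumerates the points and lines of this plane, normalizing each point so its first nonzero entry is $1$ as described in the other answer, and checks the counts derived above:

```python
from itertools import product

q = 3  # work over GF(3); arithmetic is just "mod 3"

def normalize(v):
    # Scale v so its first nonzero entry is 1; since q is prime,
    # pow(c, q - 2, q) is the multiplicative inverse of c mod q.
    c = next(x for x in v if x != 0)
    inv = pow(c, q - 2, q)
    return tuple((inv * x) % q for x in v)

# Points: one-dimensional subspaces, i.e. normalized nonzero vectors.
points = sorted({normalize(v) for v in product(range(q), repeat=3) if any(v)})
assert len(points) == 1 + q + q**2  # 13

# Lines: two-dimensional subspaces {(x, y, z) : ax + by + cz = 0},
# one for each (normalized) normal vector (a, b, c).
def line(n):
    return frozenset(p for p in points if sum(a * b for a, b in zip(n, p)) % q == 0)

lines = {line(n) for n in points}
assert len(lines) == 1 + q + q**2   # 13 again: the point/line duality noted above

# Each line has q + 1 = 4 points, and each point lies on q + 1 = 4 lines.
assert all(len(L) == q + 1 for L in lines)
assert all(sum(p in L for L in lines) == q + 1 for p in points)
print(len(points), "points,", len(lines), "lines")
```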
Function satisfying $\lim_{x\to 0}\frac{f(ax)}{f(x)}=a$
2,103,164
11
https://math.stackexchange.com/questions/2103164/function-satisfying-lim-x-to-0-fracfaxfx-a
<p>In this answer <a href="https://math.stackexchange.com/a/2101475/72031">https://math.stackexchange.com/a/2101475/72031</a> I proved that we can sometimes avoid the limit $$\lim_{x\to 0}\frac{\sin x}{x} = 1\tag{1}$$ and instead use the simpler limit $$\lim_{x\to 0}\cos x=1\tag{2}$$ to evaluate many limits. Specifically, we can use $(2)$ to show that $$\lim_{x\to 0}\frac{\sin nx}{\sin x} =n\tag{3}$$ for all rational $n$. My contention is that the above limit cannot be established for irrational values of $n$ just by using $(2)$, and that it necessarily requires the use of $(1)$.</p> <p>Based on this thought I pose the following problem:</p> <blockquote> <p>Let $f:\mathbb{R} \to\mathbb{R}$ be continuous with $f(0)=0$ and $$\lim_{x\to 0}\frac{f(ax)}{f(x)}=a\tag{4}$$ for all non-zero real values of $a$. Does this imply that $f'(0)$ exists and is nonzero? </p> </blockquote> <p>I think this is false, but I could not find an easy counter-example. So either a proof or a counter-example is desired. </p>
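For concreteness, one standard way to get $(3)$ for rational $n$ from $(2)$ alone (a sketch of the kind of argument the question alludes to; the linked answer may proceed differently) uses the addition formula: $$\frac{\sin nx}{\sin x}=\cos((n-1)x)+\cos x\cdot\frac{\sin((n-1)x)}{\sin x},$$ so by $(2)$ and induction the limit is $n$ for every positive integer $n$; then for $n = p/q$ substitute $x = qt$, so that $$\frac{\sin nx}{\sin x}=\frac{\sin(pt)}{\sin(qt)}=\frac{\sin(pt)/\sin t}{\sin(qt)/\sin t}\to\frac{p}{q}\quad\text{as } t\to 0.$$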
[ { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). Hence if we put $f(x)=x\\exp(g(x))$ for $x\\not =0$, and $f(0)=0$, we are done.</p>\n" }, { "answer_id": 2103324, "score": 1, "text": "<p>However the previous answer is correct this gives a very simple one if $a $ may only be positive, then $f=|x|$ is also continuous and the limit of the ratio is also $a $ and the derivative doesn't exist in 0</p>\n" }, { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). Hence if we put $f(x)=x\\exp(g(x))$ for $x\\not =0$, and $f(0)=0$, we are done.</p>\n" }, { "answer_id": 2103324, "score": 1, "text": "<p>However the previous answer is correct this gives a very simple one if $a $ may only be positive, then $f=|x|$ is also continuous and the limit of the ratio is also $a $ and the derivative doesn't exist in 0</p>\n" }, { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). Hence if we put $f(x)=x\\exp(g(x))$ for $x\\not =0$, and $f(0)=0$, we are done.</p>\n" }, { "answer_id": 2103324, "score": 1, "text": "<p>However the previous answer is correct this gives a very simple one if $a $ may only be positive, then $f=|x|$ is also continuous and the limit of the ratio is also $a $ and the derivative doesn't exist in 0</p>\n" }, { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). Hence if we put $f(x)=x\\exp(g(x))$ for $x\\not =0$, and $f(0)=0$, we are done.</p>\n" }, { "answer_id": 2103324, "score": 1, "text": "<p>However the previous answer is correct this gives a very simple one if $a $ may only be positive, then $f=|x|$ is also continuous and the limit of the ratio is also $a $ and the derivative doesn't exist in 0</p>\n" }, { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). Hence if we put $f(x)=x\\exp(g(x))$ for $x\\not =0$, and $f(0)=0$, we are done.</p>\n" }, { "answer_id": 2103324, "score": 1, "text": "<p>However the previous answer is correct this gives a very simple one if $a $ may only be positive, then $f=|x|$ is also continuous and the limit of the ratio is also $a $ and the derivative doesn't exist in 0</p>\n" }, { "answer_id": 2103192, "score": 6, "text": "<p>I try a counter-example:</p>\n\n<p>Let first be $g(x)=\\sqrt{|\\log (|x|)|}$. For any $a$ not zero $g(ax)-g(x)\\to 0$ if $x\\to 0$, $x\\not =0$. (Because for $|x|$ small, $|\\log |a|+\\log |x||=-\\log |x|-\\log |a|$). 
Question: Function satisfying $\lim_{x\to 0}\frac{f(ax)}{f(x)}=a$ Answer 1: <p>I try a counter-example:</p> <p>First let $g(x)=\sqrt{|\log (|x|)|}$. For any nonzero $a$, $g(ax)-g(x)\to 0$ as $x\to 0$, $x\not =0$. (Because for $|x|$ small, $|\log |a|+\log |x||=-\log |x|-\log |a|$.) Hence if we put $f(x)=x\exp(g(x))$ for $x\not =0$, and $f(0)=0$, we are done: $f(ax)/f(x)=a\exp(g(ax)-g(x))\to a$, while $f(x)/x=\exp(g(x))\to\infty$, so $f'(0)$ does not exist.</p> Answer 2: <p>While the previous answer is correct, here is a very simple one if $a$ may only be positive: $f(x)=|x|$ is also continuous, the limit of the ratio is again $a$, and the derivative doesn't exist at $0$.</p>
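A quick numeric sanity check of the counter-example in Answer 1 (a minimal Python sketch; the choice $a=\pi$ is arbitrary, and the convergence of $f(ax)/f(x)$ to $a$ is slow because $g(ax)-g(x)$ decays only like $1/\sqrt{|\log x|}$):

```python
import math

def f(x):
    # Answer 1's counter-example: f(x) = x * exp(sqrt(|log|x||)), f(0) = 0.
    return 0.0 if x == 0 else x * math.exp(math.sqrt(abs(math.log(abs(x)))))

a = math.pi  # any fixed nonzero a works
for x in (1e-4, 1e-8, 1e-16, 1e-32):
    ratio = f(a * x) / f(x)  # slowly approaches a = 3.14159...
    slope = f(x) / x         # difference quotient at 0; blows up, so f'(0) fails to exist
    print(f"x = {x:.0e}   f(ax)/f(x) = {ratio:.5f}   f(x)/x = {slope:.1f}")
```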
Can we always use the language of set theory to talk about functions?
2,103,839
11
https://math.stackexchange.com/questions/2103839/can-we-always-use-the-language-of-set-theory-to-talk-about-functions
<p>A relation is a set, and a function is a specific kind of relation. Therefore a function is a set also. </p> <p>However, at the levels of math I've studied (undergraduate only), I get the feeling people don't talk about functions as sets in the purest sense.</p> <p>With the aim of better understanding whether there's some sort of informal line between functions and sets, <strong>here's one question I have:</strong></p> <p>Let $f(x)=\sqrt{x}$. (With domain all real numbers greater than or equal to 0)</p> <p>Let $g(x)=\sqrt{|x|}$. (With domain all real numbers)</p> <p><strong>Is it correct to say $f \subseteq g$ ? How about $f(x) \subseteq g(x)$?</strong></p> <p><strong>More broadly, am I right to notice a line between functions and sets? Can we always use the language of set theory to talk about functions?</strong></p>
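To make the set-theoretic reading of the question concrete, here is a minimal sketch on a finite sample domain; the sample points and the names f_pairs / g_pairs are illustrative assumptions for this sketch, not anything from the post:

```python
import math

# Illustrative sketch only: model f(x) = sqrt(x) (domain x >= 0) and
# g(x) = sqrt(|x|) (domain all reals) as finite sets of ordered pairs,
# which is the set-theoretic reading of "function" asked about here.
# A function on the reals is an infinite set of pairs; a finite sample
# can only suggest the containment, not prove it.

xs_f = [0.0, 1.0, 2.0, 4.0]        # a finite sample of f's domain
xs_g = [-4.0, -2.0, -1.0] + xs_f   # a finite sample of g's domain

f_pairs = {(x, math.sqrt(x)) for x in xs_f}
g_pairs = {(x, math.sqrt(abs(x))) for x in xs_g}

# As sets of pairs, f sits inside g: every (x, sqrt(x)) with x >= 0
# is also a pair of g, so "f ⊆ g" is the sensible statement.
print(f_pairs <= g_pairs)  # True

# By contrast, "f(x) ⊆ g(x)" would apply subset notation to two real
# numbers, which is not meaningful; for x >= 0 one writes f(x) = g(x).
```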
[ { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" }, { "answer_id": 2103894, "score": 14, "text": "<p>There are two separate things: </p>\n\n<p>1) How we think about mathematical objects; and</p>\n\n<p>2) How we <em>formalize</em> mathematical objects within set theory. </p>\n\n<p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p>\n\n<p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p>\n\n<p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p>\n\n<p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p>\n\n<p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p>\n\n<p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p>\n\n<p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p>\n\n<p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p>\n\n<p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p>\n" }, { "answer_id": 2103848, "score": 12, "text": "<p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \\subset g$ </p>\n" } ]
Question: Can we always use the language of set theory to talk about functions? Answer 1: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 2: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p>
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 29: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 30: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 31: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 32: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 33: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 34: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 35: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 36: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 37: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 38: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. 
Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p> Answer 39: <p>There are two separate things: </p> <p>1) How we think about mathematical objects; and</p> <p>2) How we <em>formalize</em> mathematical objects within set theory. </p> <p>People such as Archimedes, Eudoxus, Newton, Euler, and Gauss did brilliant math long before the development of set theory. Clearly set-theoretic formalization is not a prerequisite of doing math.</p> <p>The formalization of mathematical objects in the context of set theory is a relatively recent historical development, having taken place between say the late nineteenth century and much of the twentieth. During that period of time mathematicians found a need for more rigorous foundations, and set theory turned out to be highly useful for this purpose. It's not necessarily the last word on foundations; nor do we spend much time caring about the formalization most of the time.</p> <p>For example if we are in calculus class and we encounter the function defined by $f(x) = x^2$, we typically think of it as a machine or black box that inputs a real number and outputs the square of the input. Nobody thinks of it as a set of ordered pairs. </p> <p>On the other hand if we're learning analytic geometry for the first time, given $f(x) = x^2$ we make a table containing a few sample pairs $(x, x^2)$, then we plot those points in the $x$-$y$ plane, and we see that the dots seem to form a parabola. When we graph a function we are implicitly using the idea of a function as a set of ordered pairs!</p> <p>The best way to think about this is that we have two points of view, the intuition and the formalism. We go back and forth between them as needed. Sometimes a function is a machine or a mapping or a correspondence, and sometimes it's a set of ordered pairs. Whichever point of view helps us at any given moment. </p> <p>One can ask about the nature of the relationship between the intuitive idea of a function (or any other mathematical object) and the set-theoretic symbology that represents or models that object. This is a question of philosophy. </p> <p>Perhaps functions and numbers and sets exist in some abstract Platonic realm, and our symbols are accurate representations of them. Or perhaps our symbols are helpful but inaccurate representations. Or perhaps there are no functions or numbers or sets at all, just strings of symbols manipulated in accordance with formal rules. In that latter point of view, math is just a game like chess. Nobody thinks the laws of the universe are written in chess, but many people think the laws of the universe are written in math. Why are the rules of math so different than the rules of chess?</p> <p>Many smart people have thought deeply about these issues. But when they think about these things they are doing philosophy, not math. </p> <p>The takeaway is that when we do math, we use both intuition and formalism, whichever is most helpful at any given moment. And we don't think about the philosophy, unless we are doing philosophy.</p> Answer 40: <p>When you think about a function as a set, it is a set of ordered pairs. The first element of the pair belongs to the domain, the second to the range. When you refer to the function as a whole, it should be $f$ or $g$, not $f(x)$ which should be the value of the function at $x$. Seen as a set of ordered pairs, every pair that belongs to $f$ also belongs to $g$, so you can say $f \subset g$ </p>
If $abc=1$ then $\sum\limits_{cyc}\sqrt{\frac{a}{4a+2b+3}}\leq1$.
2,116,440
11
https://math.stackexchange.com/questions/2116440/if-abc-1-so-sum-limits-cyc-sqrt-fraca4a2b3-leq1
<p>Let $a$, $b$ and $c$ be positive numbers such that $abc=1$. Prove that: $$\sqrt{\frac{a}{4a+2b+3}}+\sqrt{\frac{b}{4b+2c+3}}+\sqrt{\frac{c}{4c+2a+3}}\leq1$$</p> <p>Equality holds at $a=b=c=1$ (each term equals $\frac{1}{3}$), and it also "occurs" in the limit $a\rightarrow+\infty$, $b\rightarrow+\infty$ with $a&gt;&gt;b$.</p> <p>I tried AM-GM, Cauchy-Schwarz and more, but without any success.</p>
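<p><em>Editorial note (not part of the original question):</em> before attempting a proof, a quick numerical sanity check is often useful. The sketch below, in Python, samples random positive triples with $abc=1$ and confirms the sum never exceeds $1$, with equality at $a=b=c=1$ (each term is $\sqrt{1/9}=1/3$) and approach to $1$ in the limit $a\rightarrow+\infty$, $b\rightarrow+\infty$ with $a&gt;&gt;b$. The function name <code>lhs</code> is the editor's own, not taken from the post.</p>

<pre><code>import random

def lhs(a, b, c):
    # Left-hand side of the conjectured inequality (assumes a*b*c = 1).
    return ((a / (4*a + 2*b + 3)) ** 0.5
            + (b / (4*b + 2*c + 3)) ** 0.5
            + (c / (4*c + 2*a + 3)) ** 0.5)

# Equality at a = b = c = 1: each term is sqrt(1/9) = 1/3.
assert abs(lhs(1.0, 1.0, 1.0) - 1.0) &lt; 1e-12

# Random positive triples with abc = 1 should stay at or below 1.
for _ in range(10**5):
    a, b = random.lognormvariate(0, 2), random.lognormvariate(0, 2)
    c = 1.0 / (a * b)
    assert lhs(a, b, c) &lt;= 1.0 + 1e-9

# Limiting equality case a, b -&gt; +infinity with a &gt;&gt; b: the sum tends to 1.
print(lhs(1e12, 1e6, 1e-18))  # ~0.9999998
</code></pre>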
[ { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. 
</p>\n" }, { "answer_id": 3301620, "score": 2, "text": "<p>With the substitutions <span class=\"math-container\">$a = \\frac{x}{y}, \\ b = \\frac{y}{z}$</span> and <span class=\"math-container\">$c = \\frac{z}{x}$</span>, \nit suffices to prove that\n<span class=\"math-container\">$$\\sum_{\\mathrm{cyc}} \\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}} \\le 1.$$</span>\nIt suffices to prove that (The desired result follows by summing cyclically.)\n<span class=\"math-container\">$$\\sqrt{\\frac{zx}{4zx + 2y^2 + 3yz}}\n\\le \\frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span>\nSquaring both sides, it suffices to prove that <span class=\"math-container\">$f(x, y, z) \\ge 0$</span> where\n<span class=\"math-container\">\\begin{align}\nf(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\\nonumber\\\\\n&amp;\\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\\nonumber\\\\\n&amp;\\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.\n\\end{align}</span>\nWe use the Buffalo Way. There are three possible cases:</p>\n\n<p>1) <span class=\"math-container\">$z = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$y = z + s, \\ x = z+ t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>\nwhere\n<span class=\"math-container\">\\begin{align}\na_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\\\\na_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\\\\na_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\\\\na_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\\\\na_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.\n\\end{align}</span>\nIt is easy to prove that <span class=\"math-container\">$a_4\\ge 0, \\ a_3\\ge 0, \\ a_2 \\ge 0, \\ a_0 \\ge 0$</span> and <span class=\"math-container\">$4a_2a_0 \\ge a_1^2$</span>. \nThus, <span class=\"math-container\">$f(z+t, z+s, z) \\ge 0$</span>.</p>\n\n<p>2) <span class=\"math-container\">$y = \\min(x,y,z)$</span>: Let <span class=\"math-container\">$z = y+s, \\ x = y+t; \\ s, t\\ge 0$</span>. We have\n<span class=\"math-container\">\\begin{align}\nf(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\\nonumber\\\\\n&amp;\\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\\nonumber\\\\\n&amp;\\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.\n\\end{align}</span>\nClearly, <span class=\"math-container\">$f(y+t, y, y+s)\\ge 0$</span>.</p>\n\n<p>3) <span class=\"math-container\">$x = \\min(x,y,z)$</span>: Similar. </p>\n" } ]
Question: If $abc=1$, show that $\sum\limits_{cyc}\sqrt{\frac{a}{4a+2b+3}}\leq1$. Answer 1: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, which parametrize all positive $a, b, c$ with $abc = 1$, it suffices to prove that
<span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1$$</span>
(multiply the numerator and denominator of each term by $yz$). For this it suffices to prove the term-wise bound
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}}
\le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx};$$</span>
the desired result then follows by summing cyclically, since the three cyclic shifts of the numerator on the right add up exactly to the common denominator.
Squaring both sides and clearing denominators, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where
<span class="math-container">\begin{align}
f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\
&amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\
&amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5.
\end{align}</span>
We use the Buffalo Way. Some variable is always the smallest, so there are three possible cases:</p>

<p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have
<span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span>
where
<span class="math-container">\begin{align}
a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\
a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\
a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\
a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\
a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4.
\end{align}</span>
It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> (for <span class="math-container">$a_4$</span>, note that <span class="math-container">$1728^2 &lt; 4\cdot 5616\cdot 1728$</span>) and that <span class="math-container">$4a_2a_0 \ge a_1^2$</span>, so the quadratic part <span class="math-container">$a_2z^2 + a_1z + a_0$</span> is nonnegative even where <span class="math-container">$a_1 &lt; 0$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p>

<p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have
<span class="math-container">\begin{align}
f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\
&amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\
&amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4.
\end{align}</span>
Clearly <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>: every coefficient of a power of <span class="math-container">$y$</span> is nonnegative for <span class="math-container">$s, t \ge 0$</span>, the leading one by the same discriminant observation as above.</p>

<p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar.</p>
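Within case 1 of the answer above, the claims that are not termwise obvious are $a_3, a_2 \ge 0$ (each has one negative term) and $4a_2a_0 \ge a_1^2$. A minimal numeric spot check of exactly these claims, again a sketch assuming sympy, with the coefficients copied verbatim from the answer:

```python
# Spot check of the Buffalo Way claims in case 1 above; a sketch assuming
# sympy, with the coefficients a4..a0 copied verbatim from the answer.
import random

import sympy as sp

s, t = sp.symbols('s t', nonnegative=True)

a4 = 5616*s**2 - 1728*s*t + 1728*t**2
a3 = 3376*s**3 + 12864*s**2*t - 1176*s*t**2 + 1000*t**3
a2 = 476*s**4 + 7400*s**3*t + 10156*s**2*t**2 - 1376*s*t**3 + 117*t**4
a1 = 956*s**4*t + 4920*s**3*t**2 + 1924*s**2*t**3 - 225*s*t**4
a0 = 512*s**4*t**2 + 576*s**3*t**3 + 162*s**2*t**4

# a4 >= 0: a quadratic form in (s, t) with negative discriminant.
assert 1728**2 - 4*5616*1728 < 0
# a0 >= 0 is termwise obvious.

# a3 >= 0, a2 >= 0 and 4*a2*a0 >= a1^2 on random nonnegative samples.  The
# last bound forces a2*z**2 + a1*z + a0 >= 0 even where a1 < 0, which with
# a4, a3 >= 0 gives f(z+t, z+s, z) >= 0 for z >= 0.
claims = sp.lambdify((s, t), [a3, a2, sp.expand(4*a2*a0 - a1**2)])
for _ in range(20_000):
    vals = claims(random.uniform(0.0, 100.0), random.uniform(0.0, 100.0))
    assert all(v >= -1e-6 for v in vals)  # tiny float slack
```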
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 3: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. 
We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 4: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. 
We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 5: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 6: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) 
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 7: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. 
We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 8: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. 
We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 9: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 10: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) 
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 11: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. 
We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 12: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. 
We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 13: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 14: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) 
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 15: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. 
We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 16: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. 
We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 17: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 18: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) 
<span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 19: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. 
We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p> Answer 20: <p>With the substitutions <span class="math-container">$a = \frac{x}{y}, \ b = \frac{y}{z}$</span> and <span class="math-container">$c = \frac{z}{x}$</span>, it suffices to prove that <span class="math-container">$$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$</span> It suffices to prove that (The desired result follows by summing cyclically.) <span class="math-container">$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$</span> Squaring both sides, it suffices to prove that <span class="math-container">$f(x, y, z) \ge 0$</span> where <span class="math-container">\begin{align} f(x,y,z) &amp;= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &amp;\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &amp;\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align}</span> We use the Buffalo Way. There are three possible cases:</p> <p>1) <span class="math-container">$z = \min(x,y,z)$</span>: Let <span class="math-container">$y = z + s, \ x = z+ t; \ s, t\ge 0$</span>. We have <span class="math-container">$$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$</span> where <span class="math-container">\begin{align} a_4 &amp;= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &amp;= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &amp;= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &amp;= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &amp;= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align}</span> It is easy to prove that <span class="math-container">$a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$</span> and <span class="math-container">$4a_2a_0 \ge a_1^2$</span>. Thus, <span class="math-container">$f(z+t, z+s, z) \ge 0$</span>.</p> <p>2) <span class="math-container">$y = \min(x,y,z)$</span>: Let <span class="math-container">$z = y+s, \ x = y+t; \ s, t\ge 0$</span>. 
We have <span class="math-container">\begin{align} f(y+t, y, y+s) &amp;= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &amp;\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &amp;\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align}</span> Clearly, <span class="math-container">$f(y+t, y, y+s)\ge 0$</span>.</p> <p>3) <span class="math-container">$x = \min(x,y,z)$</span>: Similar. </p>
Semigroup analogue to the classification of finite simple groups?
2,116,908
11
https://math.stackexchange.com/questions/2116908/semigroup-analogue-to-the-classification-of-finite-simple-groups
<p>Is there a semigroup analogue to the classification of finite simple groups?</p> <p>If so what are some of the major results?</p> <p>--Edit, reference links---</p> <p><a href="https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups" rel="noreferrer">Classification of Finite Simple Groups</a></p> <p><a href="https://en.wikipedia.org/wiki/Special_classes_of_semigroups" rel="noreferrer">Special Classes of Semigroups</a> </p> <p><a href="http://oeis.org/A239750" rel="noreferrer">One of my semigroup equation sequences in OEIS, g(f(x)) = f(f(f(x)))</a></p>
[ { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. 
The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. 
(In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. 
(A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. 
Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). 
ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. 
They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. 
<a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. 
The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. 
(In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. 
(A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. 
Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). 
ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. 
They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. 
<a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" }, { "answer_id": 2122925, "score": 8, "text": "<p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href=\"https://en.wikipedia.org/wiki/Group_extension#Extension_problem\" rel=\"noreferrer\">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p>\n\n<p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. 
The <a href=\"https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory\" rel=\"noreferrer\">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\\{1, a, b\\}$ in which $aa = ba = a$ and $ab = bb = b$.</p>\n\n<p>The best reference on this theory is the (advanced) book</p>\n\n<p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p>\n" }, { "answer_id": 2842737, "score": 7, "text": "<p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p>\n<p>There is a complete classification of finite congruence-free semigroups. They are either:</p>\n<ol>\n<li>a two-element semigroup</li>\n<li>a finite simple group</li>\n<li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically paramatrized by 0-1 matrices.</li>\n</ol>\n<p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href=\"https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero\">this</a> Math Overflow answer. (A zero is just an element <span class=\"math-container\">$0$</span> such that <span class=\"math-container\">$0a = a0 = 0$</span> for all <span class=\"math-container\">$a$</span>.)</p>\n<p>I don't know of a great online source for the third case. <a href=\"https://arxiv.org/pdf/math/0205083.pdf\" rel=\"nofollow noreferrer\">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>\n" } ]
Question: Semigroup analogue to the classification of finite simple groups? Answer 1: <p>In group theory, the classification of finite simple groups reduces the classification of all finite groups to the so-called <a href="https://en.wikipedia.org/wiki/Group_extension#Extension_problem" rel="noreferrer">extension problem</a>. Roughly speaking, the extension problem consists in describing a group in terms of a particular normal subgroup and quotient group.</p> <p>There is no semigroup analogue to this theory, but a weaker classification scheme exists. A semigroup $S$ <em>divides</em> a semigroup $T$ if $S$ is a homomorphic image of a subsemigroup of $T$. The <a href="https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory" rel="noreferrer">Krohn–Rhodes theorem</a> states that every finite semigroup $S$ divides a wreath product of finite simple groups, each dividing $S$, and copies of the 3-element monoid $\{1, a, b\}$ in which $aa = ba = a$ and $ab = bb = b$.</p> <p>The best reference on this theory is the (advanced) book</p> <p>[1] J. Rhodes, B. Steinberg. <em>The $q$-theory of finite semigroups</em>. Springer Verlag (2008). ISBN 978-0-387-09780-0.</p> Answer 2: <p>In semigroup theory, the term &quot;simple&quot; has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called &quot;congruence-free semigroups&quot;. (In universal algebra, &quot;simple&quot; means no non-trivial quotients.)</p> <p>There is a complete classification of finite congruence-free semigroups. They are either:</p> <ol> <li>a two-element semigroup</li> <li>a finite simple group</li> <li>a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are essentially parametrized by 0-1 matrices.</li> </ol> <p>The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in <a href="https://mathoverflow.net/questions/268000/finite-congruence-free-semigroup-without-zero">this</a> Math Overflow answer. (A zero is just an element <span class="math-container">$0$</span> such that <span class="math-container">$0a = a0 = 0$</span> for all <span class="math-container">$a$</span>.)</p> <p>I don't know of a great online source for the third case. <a href="https://arxiv.org/pdf/math/0205083.pdf" rel="nofollow noreferrer">This preprint</a> states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.</p>
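For concreteness, here is a minimal Python sketch — an illustration added here, not part of either answer — of the 3-element "flip-flop" monoid $\{1, a, b\}$ that serves as the aperiodic building block in the Krohn–Rhodes theorem. It builds the multiplication table and verifies the quoted relations $aa = ba = a$ and $ab = bb = b$, the identity law, and associativity by brute force:

```python
# Minimal sketch of the "flip-flop" monoid U = {1, a, b} from the
# Krohn-Rhodes theorem; mul[x][y] denotes the product x * y.
mul = {
    "1": {"1": "1", "a": "a", "b": "b"},  # 1 is the identity
    "a": {"1": "a", "a": "a", "b": "b"},  # aa = a, ab = b
    "b": {"1": "b", "a": "a", "b": "b"},  # ba = a, bb = b
}
elems = ["1", "a", "b"]

# The defining relations quoted in the answer: aa = ba = a, ab = bb = b.
assert mul["a"]["a"] == "a" and mul["b"]["a"] == "a"
assert mul["a"]["b"] == "b" and mul["b"]["b"] == "b"

# Identity law and associativity, checked exhaustively.
for x in elems:
    assert mul["1"][x] == x and mul[x]["1"] == x
for x in elems:
    for y in elems:
        for z in elems:
            assert mul[mul[x][y]][z] == mul[x][mul[y][z]]

print("flip-flop monoid: relations, identity and associativity all hold")
```

Note that every element satisfies $x^2 = x$, so the flip-flop monoid contains no nontrivial subgroup; this is exactly why it plays the role of the aperiodic counterpart to the simple-group factors in the decomposition.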
How does a mathematician create a new zeta function?
2,079,389
10
https://math.stackexchange.com/questions/2079389/how-does-a-mathematician-create-a-new-zeta-function
<p>Imagine a mathematician wants to create this function:</p> <p>$$F(s)=\sum_{n=0}^{\infty} \frac{1}{(n^2+1)^s}$$</p> <p>Where would he start? </p> <p>More precisely, how would he know the exact values of $F(s)$ for $s&gt; 0$? And how would he extend this result for every natural $s$? And for real $s$?</p>
[ { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(n^2+1)^{-s}=n^{-2s}(1+n^{-2})^{-s}$ with $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ for $n \\ge 2$, you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k$, $k \\in \\mathbb{N}$.</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From the location of these poles, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrarily large $N$:</p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$, and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{(-x)^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$.</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}\\left(s-\\tfrac{1}{2}+k\\right)G(s)= \\frac{(-1)^k\\sqrt{\\pi}}{2\\,k!}, \\qquad \\lim_{s \\to -k}(s+k)G(s)= \\frac{(-1)^{k+1}}{2\\,k!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" } ]
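As a concrete instance of the differentiation approach in answer 2079454, one differentiation of the stated series gives the $k=1$ case in closed form:

$$\sum_{n=0}^\infty\frac1{(n^2+x)^{2}}=\frac1{2x^{2}}+\frac{\pi\coth(\pi\sqrt x)}{4x^{3/2}}+\frac{\pi^{2}}{4x\sinh^{2}(\pi\sqrt x)},$$

so that at $x=1$,

$$F(2)=\sum_{n=0}^\infty\frac1{(n^2+1)^{2}}=\frac12+\frac\pi4\coth(\pi)+\frac{\pi^{2}}{4\sinh^{2}(\pi)}\approx 1.30684,$$

consistent with a direct numerical summation of $\sum_{n\ge0}(n^2+1)^{-2}$.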
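The binomial-into-zeta rearrangement in answer 2079422 is also easy to test numerically. The following minimal sketch (assuming the mpmath library is available; a non-integer $s$ is chosen so that the binomial coefficients avoid gamma-function poles) compares the defining sum with the rearranged series:

<pre><code># Numerical check of F(s) = 1 + 2^(-s) + sum_k C(-s,k) (zeta(2s+2k) - 1)
from mpmath import mp, binomial, zeta, nsum, inf

mp.dps = 25
s = mp.mpf('2.5')  # any Re(s) > 1/2 works; non-integer keeps binomial(-s, k) pole-free
direct = nsum(lambda n: (n**2 + 1)**(-s), [0, inf])
series = 1 + 2**(-s) + nsum(lambda k: binomial(-s, k) * (zeta(2*s + 2*k) - 1), [0, inf])
print(direct, series)  # both print ~1.1992..., agreeing to working precision
</code></pre>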
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\\ldots$$</p>\n" }, { "answer_id": 2079454, "score": 13, "text": "<p>Update:</p>\n\n<p>If one may recall <a href=\"https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt\">the following series</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{n^2+x}=\\frac1{2x}+\\frac\\pi{2\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n <p>$$\\sum_{n=0}^\\infty\\frac1{(n^2+x)^{k+1}}=\\frac1{2x^{k+1}}+\\frac{(-1)^k\\pi}{2\\times k!}\\frac{d^k}{dx^k}\\frac1{\\sqrt x\\tanh(\\pi\\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using <a href=\"https://en.wikipedia.org/wiki/General_Leibniz_rule\" rel=\"nofollow noreferrer\">general Leibniz rule</a>, we have</p>\n\n<p>$$\\frac{d^k}{dx^k}\\frac{\\coth(\\pi\\sqrt x)}{\\sqrt x}=\\sum_{p=0}^k\\binom kp\\frac{\\Gamma(3/2+n-p)}{-2\\sqrt\\pi}(-1)^{n-p}x^{-1/2-n+p}(\\coth(\\pi\\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule with the $n$th derivative of $\\coth$ using <a href=\"https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula\" rel=\"nofollow noreferrer\">Faà di Bruno's formula</a>,</p>\n\n<p>$$\\small(\\coth(\\pi\\sqrt x))^{(p)}=\\sum_{q=1}^n\\coth^{(q)}(\\pi\\sqrt x)B_{p,q}\\left(\\pi\\frac12x^{-1/2},-\\pi\\frac14x^{-1/2},\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\\right)$$</p>\n\n<p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\\coth$ is then given when $q\\ge1$:</p>\n\n<p>$$\\coth^{(q)}(x)=2^q(\\coth(x)-1)\\sum_{r=0}^q\\frac{(-1)^rr!S_q^{(r)}}{2^r}(\\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n <p>$$\\tiny F(k+1)=\\frac12+\\frac{(-1)^k\\pi}{2(k!)}\\left(\\binom kp\\frac{\\Gamma(3/2+k)}{-2\\sqrt\\pi}(-1)^k(\\coth(\\pi)+\\sum_{p=1}^k\\sum_{q=1}^n\\sum_{r=0}^q\\binom kp\\frac{\\Gamma(3/2+k-p)}{-2\\sqrt\\pi}(-1)^{k+p+r}2^{q-r}(\\coth(\\pi)-1)(-1)^rr!S_q^{(r)}(\\coth(\\pi)+1)^r(\\pi)B_{p,q}\\left(\\pi,-\\pi\\frac14,\\dots,\\sqrt\\pi\\frac{\\Gamma(3/2+q)}{-4}(-1)^q\\right)\\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">theta function</a>:</p>\n\n<p>$$\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty r^{n^2}$$</p>\n\n<p>Term by term integration then reveals that</p>\n\n<p>$$\\frac1x\\int_0^x\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\sum_{n=0}^\\infty\\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\\frac1{x_1}\\int_0^{x_1}dx_2\\frac1{x_2}\\int_0^{x_2}dx_3\\dots dx_k\\frac1{x_k}\\int_0^{x_k}dr\\frac12[\\vartheta_3(0,r)+1]=\\sum_{n=0}^\\infty\\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href=\"https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq\">in this post</a>:</p>\n\n<p>$$\\sum_{n=0}^\\infty\\frac1{1+n^2}=\\int_0^1\\frac12[\\vartheta_3(0,r)+1]\\ dr=\\frac\\pi{2\\tanh\\pi}+\\frac12$$</p>\n" }, { "answer_id": 2079422, "score": 5, "text": "<p>It doesn't have an Euler product. 
</p>\n\n<p>Expanding $(1+n^{-2})^{-s} =\\sum_{k=0}^\\infty {-s \\choose k} n^{-2k}$ you get $$F(s) =\\sum_{n=0}^\\infty (n^2+1)^{-s} = 1+2^{-s}+\\sum_{k=0}^\\infty {-s \\choose k} (\\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \\frac{1}{2}-k, k \\in \\mathbb{N}$</p>\n\n<p>Using $\\Gamma(s) a^{-s} =\\int_0^\\infty x^{s-1}e^{-ax}dx$ and $\\theta(x) = \\sum_{n=1}^\\infty e^{-x n^2}$ you have\n$$G(s) = (F(s)-1-2^{-s}) \\Gamma(s) = \\int_0^\\infty x^{s-1}e^{-x} \\theta(x)dx, \\qquad \\Gamma(s) \\zeta(2s) = \\int_0^\\infty x^{s-1}\\theta(x) dx $$\nwhere $\\Gamma(s) \\zeta(2s)-\\frac{\\sqrt{\\pi}}{2(s-1/2)}+\\frac{1}{2s}$ is entire.</p>\n\n<p>From these poles location, we <a href=\"https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B\" rel=\"nofollow noreferrer\">can deduce</a> that for arbitrary large $N$ : </p>\n\n<p>$\\theta(x) = \\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2}+o(x^N)$ as $x \\to 0$ and hence $e^{-x}\\theta(x) = \\sum_{k\\ge 0} \\frac{x^k}{k!}(\\frac{\\sqrt{\\pi}}{2}x^{-1/2}-\\frac{1}{2})+o(x^N)$ as $x \\to 0$</p>\n\n<p>so that\n$$\\lim_{s \\to 1/2-k}(s+1/2-k)G(s)= \\frac{\\sqrt{\\pi}}{2k!}, \\qquad \\lim_{s \\to -k}(s-k)G(s)= \\frac{-1/2}{(k+1)!}$$</p>\n" }, { "answer_id": 2079465, "score": 5, "text": "<p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing \"exact values\" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p>\n" }, { "answer_id": 2079435, "score": 3, "text": "<p>Perhaps look into the theory of <a href=\"https://en.wikipedia.org/wiki/Dirichlet_series\" rel=\"nofollow noreferrer\">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \\mapsto [\\exists x\\in \\mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\ldots$$</p>\n" } ]
Question: How does a mathematician create a new zeta function? Answer 1: <p>Update:</p>\n\n<p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p>\n\n<p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p>\n\n<p>Then differentiating both sides $k$ times yields</p>\n\n<blockquote>\n  <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\,k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p>\n</blockquote>\n\n<p>and evaluation at $x=1$ gives closed forms.</p>\n\n<p>Update:</p>\n\n<p>Using the <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p>\n\n<p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp(-1)^{k-p}\frac{\Gamma(k-p+\frac12)}{\sqrt\pi}x^{-1/2-k+p}(\coth(\pi\sqrt x))^{(p)}$$</p>\n\n<p>We can handle the chain rule in the $p$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p>\n\n<p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^p\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\frac\pi2x^{-1/2},-\frac\pi4x^{-3/2},\dots,\frac{(-1)^{p-q}\sqrt\pi\,\Gamma(p-q+\frac12)}{2}x^{-1/2-(p-q)}\right)$$</p>\n\n<p>where $B_{p,q}$ is the partial Bell polynomial. The $q$th derivative of $\coth$ is then given for $q\ge1$ in terms of the Stirling numbers of the second kind $S_q^{(r)}$:</p>\n\n<p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p>\n\n<p>Putting all of this together,</p>\n\n<blockquote>\n  <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2\,k!}\left((-1)^k\frac{\Gamma(k+\frac12)}{\sqrt\pi}\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^p\sum_{r=0}^q\binom kp(-1)^{k-p+r}\frac{\Gamma(k-p+\frac12)}{\sqrt\pi}\,2^{q-r}(\coth(\pi)-1)\,r!\,S_q^{(r)}(\coth(\pi)+1)^r\,B_{p,q}\left(\frac\pi2,-\frac\pi4,\dots\right)\right)$$</p>\n</blockquote>\n\n<hr>\n\n<p>Old:</p>\n\n<p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p>\n\n<p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p>\n\n<p>Term-by-term integration then reveals that</p>\n\n<p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p>\n\n<p>Repeat this process over and over to get</p>\n\n<p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p>\n\n<p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p>\n\n<p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p>\n\n<p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p>
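A minimal numerical check of Answer 1's recipe (a sketch assuming Python with the mpmath library; the precision, the range of $k$, and the function names below are arbitrary illustrative choices, not part of the original answer):

```python
# Sketch: compare direct summation of  sum_{n>=0} 1/(n^2+1)^(k+1)  with
# Answer 1's closed-form recipe
#   1/2 + (-1)^k * pi/(2 k!) * d^k/dx^k [ 1/(sqrt(x) tanh(pi sqrt(x))) ]  at x = 1.
from mpmath import mp, mpf, nsum, diff, sqrt, tanh, pi, factorial, inf

mp.dps = 30  # working precision

def F_direct(k):
    # direct summation of the series defining F(k+1)
    return nsum(lambda n: (n**2 + 1)**(-(k + 1)), [0, inf])

def F_recipe(k):
    # Answer 1's formula: k-th derivative evaluated numerically at x = 1
    g = lambda x: 1 / (sqrt(x) * tanh(pi * sqrt(x)))
    return mpf(1) / 2 + (-1)**k * pi / (2 * factorial(k)) * diff(g, 1, k)

for k in range(4):
    print(k, F_direct(k), F_recipe(k))  # the two columns agree
```

For $k=0$ this reproduces the closed form $\frac\pi{2\tanh\pi}+\frac12$ quoted at the end of the answer.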
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 3: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 4: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
Answer 3: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that) is generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 4: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson bracket (google it.) The first few terms of the sequence are:</p>\n\n<p>$$0,1,1,0,0,1\ldots$$</p>
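As a small illustration of Answer 4's Dirichlet-series view (a sketch assuming Python with mpmath; the truncation bound is an arbitrary choice), the coefficient sequence is the indicator of the numbers $x^2+1$:

```python
# Sketch: a(m) = [m = x^2 + 1 for some x in N], so that F(s) = sum_m a(m) m^(-s).
import math
from mpmath import mp, mpf, nsum, inf

mp.dps = 20

def a(m):
    # Iverson bracket: 1 exactly when m - 1 is a perfect square
    if m < 1:
        return 0
    r = math.isqrt(m - 1)
    return 1 if r * r == m - 1 else 0

print([a(m) for m in range(7)])  # [0, 1, 1, 0, 0, 1, 0]

s = 2
dirichlet = sum(a(m) * mpf(m)**(-s) for m in range(1, 40000))  # truncated series
direct = nsum(lambda n: (n**2 + 1)**(-s), [0, inf])            # F(2)
print(dirichlet, direct)  # agree up to the truncation error
```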
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 7: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 8: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 9: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 10: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 11: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 12: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 13: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 14: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 15: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 16: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 17: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 18: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 19: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 20: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 21: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 22: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 23: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 24: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 25: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 26: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 27: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 28: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 29: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 30: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 31: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 32: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 33: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 34: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 35: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 36: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 37: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 38: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 39: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 40: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 41: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 42: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 43: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 44: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 45: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 46: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 47: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 48: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 49: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 50: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 51: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 52: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 53: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 54: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 55: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 56: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 57: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 58: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 59: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 60: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 61: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 62: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 63: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 64: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 65: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 66: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 67: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 68: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 69: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 70: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 71: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 72: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 73: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 74: <p>It doesn't have an Euler product. 
</p> <p>Expanding $(1+n^{-2})^{-s} =\sum_{k=0}^\infty {-s \choose k} n^{-2k}$ you get $$F(s) =\sum_{n=0}^\infty (n^2+1)^{-s} = 1+2^{-s}+\sum_{k=0}^\infty {-s \choose k} (\zeta(2s+2k)-1)$$ so it is meromorphic on the whole complex plane, with poles at $s = \frac{1}{2}-k, k \in \mathbb{N}$</p> <p>Using $\Gamma(s) a^{-s} =\int_0^\infty x^{s-1}e^{-ax}dx$ and $\theta(x) = \sum_{n=1}^\infty e^{-x n^2}$ you have $$G(s) = (F(s)-1-2^{-s}) \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \theta(x)dx, \qquad \Gamma(s) \zeta(2s) = \int_0^\infty x^{s-1}\theta(x) dx $$ where $\Gamma(s) \zeta(2s)-\frac{\sqrt{\pi}}{2(s-1/2)}+\frac{1}{2s}$ is entire.</p> <p>From these poles location, we <a href="https://fr.wikipedia.org/wiki/Fonction_z%C3%AAta_de_Riemann#Par_une_int.C3.A9grale_sur_.E2.84.9D.2B" rel="nofollow noreferrer">can deduce</a> that for arbitrary large $N$ : </p> <p>$\theta(x) = \frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2}+o(x^N)$ as $x \to 0$ and hence $e^{-x}\theta(x) = \sum_{k\ge 0} \frac{x^k}{k!}(\frac{\sqrt{\pi}}{2}x^{-1/2}-\frac{1}{2})+o(x^N)$ as $x \to 0$</p> <p>so that $$\lim_{s \to 1/2-k}(s+1/2-k)G(s)= \frac{\sqrt{\pi}}{2k!}, \qquad \lim_{s \to -k}(s-k)G(s)= \frac{-1/2}{(k+1)!}$$</p> Answer 75: <p>Accepting, for argument's sake, that it is possible to create a function (the opposing view is that one discovers mathematical objects, rather than creating them), you have created the function simply by writing down that formula. Well, you have created it for real part of $s$ exceeding one-half, which is where the series converges. Knowing "exact values" (again, a term we can argue over, but I'll take it to mean a finite expression in terms of well-known constants, and leave it at that), well, that's generally very difficult. No one knows an exact value for $\zeta(3)$, for example, and for your function I suspect no one knows an exact value for $F(2)$. And as to extending your function to be defined for all complex $s$ (outside of a pole at $s=1/2$), that's what <em>analytic continuation</em> is for, so now you have a keyphrase to search for. </p> Answer 76: <p>Perhaps look into the theory of <a href="https://en.wikipedia.org/wiki/Dirichlet_series" rel="nofollow noreferrer">Dirichlet series</a>? Your function is the series corresponding to the sequence $n \mapsto [\exists x\in \mathbb{N}(n=x^2+1)],$ where $[-]$ is the Iverson Bracket (google it.) 
The first few terms of the sequence are:</p> <p>$$0,1,1,0,0,1\ldots$$</p> Answer 77: <p>Update:</p> <p>If one may recall <a href="https://math.stackexchange.com/questions/2079487/how-to-evaluate-int-0x-vartheta-30-t-dt">the following series</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{n^2+x}=\frac1{2x}+\frac\pi{2\sqrt x\tanh(\pi\sqrt x)}$$</p> <p>Then differentiating both sides $k$ times yields</p> <blockquote> <p>$$\sum_{n=0}^\infty\frac1{(n^2+x)^{k+1}}=\frac1{2x^{k+1}}+\frac{(-1)^k\pi}{2\times k!}\frac{d^k}{dx^k}\frac1{\sqrt x\tanh(\pi\sqrt x)}$$</p> </blockquote> <p>and evaluation at $x=1$ gives closed forms.</p> <p>Update:</p> <p>Using <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">general Leibniz rule</a>, we have</p> <p>$$\frac{d^k}{dx^k}\frac{\coth(\pi\sqrt x)}{\sqrt x}=\sum_{p=0}^k\binom kp\frac{\Gamma(3/2+n-p)}{-2\sqrt\pi}(-1)^{n-p}x^{-1/2-n+p}(\coth(\pi\sqrt x))^{(p)}$$</p> <p>We can handle the chain rule with the $n$th derivative of $\coth$ using <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno&#39;s_formula" rel="nofollow noreferrer">Faà di Bruno's formula</a>,</p> <p>$$\small(\coth(\pi\sqrt x))^{(p)}=\sum_{q=1}^n\coth^{(q)}(\pi\sqrt x)B_{p,q}\left(\pi\frac12x^{-1/2},-\pi\frac14x^{-1/2},\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^qx^{-1/2-q}\right)$$</p> <p>where $B_{n,k}$ is the bell polynomial. The $q$th derivative of $\coth$ is then given when $q\ge1$:</p> <p>$$\coth^{(q)}(x)=2^q(\coth(x)-1)\sum_{r=0}^q\frac{(-1)^rr!S_q^{(r)}}{2^r}(\coth(x)+1)^r$$</p> <p>Putting all of this together,</p> <blockquote> <p>$$\tiny F(k+1)=\frac12+\frac{(-1)^k\pi}{2(k!)}\left(\binom kp\frac{\Gamma(3/2+k)}{-2\sqrt\pi}(-1)^k(\coth(\pi)+\sum_{p=1}^k\sum_{q=1}^n\sum_{r=0}^q\binom kp\frac{\Gamma(3/2+k-p)}{-2\sqrt\pi}(-1)^{k+p+r}2^{q-r}(\coth(\pi)-1)(-1)^rr!S_q^{(r)}(\coth(\pi)+1)^r(\pi)B_{p,q}\left(\pi,-\pi\frac14,\dots,\sqrt\pi\frac{\Gamma(3/2+q)}{-4}(-1)^q\right)\right)$$</p> </blockquote> <hr> <p>Old:</p> <p>A <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">theta function</a>:</p> <p>$$\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty r^{n^2}$$</p> <p>Term by term integration then reveals that</p> <p>$$\frac1x\int_0^x\frac12[\vartheta_3(0,r)+1]\ dr=\sum_{n=0}^\infty\frac{x^{n^2}}{1+n^2}$$</p> <p>Repeat this process over and over to get</p> <p>$$\frac1{x_1}\int_0^{x_1}dx_2\frac1{x_2}\int_0^{x_2}dx_3\dots dx_k\frac1{x_k}\int_0^{x_k}dr\frac12[\vartheta_3(0,r)+1]=\sum_{n=0}^\infty\frac{x_1^{n^2}}{(1+n^2)^k}$$</p> <p>And of course, evaluate at $x_1=1$ for your function. I do not see a closed form coming out of this, but it may be useful for discerning certain things about your function.</p> <p>I do note, however, that in the case of $k=1$, the solution is given <a href="https://math.stackexchange.com/questions/1865615/curious-integrals-for-jacobi-theta-functions-int-01-vartheta-n0-qdq">in this post</a>:</p> <p>$$\sum_{n=0}^\infty\frac1{1+n^2}=\int_0^1\frac12[\vartheta_3(0,r)+1]\ dr=\frac\pi{2\tanh\pi}+\frac12$$</p> Answer 78: <p>It doesn't have an Euler product. 
Show a detailed proof of: $\int_{0}^{1}\int_{0}^{1}\left({x\over 1-xy}\cdot{\ln{x}-\ln{y}\over \ln{x}+\ln{y}}\right)\mathrm dx\mathrm dy=1-2\gamma$
2,081,750
10
https://math.stackexchange.com/questions/2081750/show-a-detail-prove-of-int-01-int-01-leftx-over-1-xy-cdot-lnx
<p>This is a variation of my recent <a href="https://math.stackexchange.com/questions/2081533/help-to-prove-that-int-01-int-011-x-over-1-xy-cdotx-over-lnx">post</a>. Strangely, it leads to a result in terms of Euler's constant $\gamma$.</p> <p>Prove that</p> <p>$$\int_{0}^{1}\int_{0}^{1}\left({x\over 1-xy}\cdot{\ln{x}-\ln{y}\over \ln{x}+\ln{y}}\right)\mathrm dx\,\mathrm dy=1-2\gamma,$$</p> <p>where $\gamma=0.577215\ldots$ is Euler's constant.</p> <p>My try: let $u=xy$; then $du=y\,dx$ and</p> <p>$$\int_{0}^{1}\int_{0}^{y}{{u\over y^2}\over 1-u}\cdot{\ln{u\over y^2}\over \ln{u}}\,\mathrm du\,\mathrm dy.$$</p> <p>I don't really see how to solve double integrals like this, even in outline.</p> <p>Could someone show me the answer in detail? Thank you.</p>
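Before the posted answers, a quick numerical check of the claimed value (a minimal SciPy sketch, not part of the original question; the integrand is only mildly singular at the corner $(1,1)$, so a quadrature warning there would be harmless):

```python
# Numerical check: the double integral should equal 1 - 2*gamma ~ -0.154431.
import numpy as np
from scipy.integrate import dblquad

def integrand(y, x):
    # dblquad integrates f(y, x) dy dx over 0 < x < 1, 0 < y < 1
    return x / (1.0 - x * y) * (np.log(x) - np.log(y)) / (np.log(x) + np.log(y))

val, err = dblquad(integrand, 0, 1, 0, 1)
print(val)                     # ~ -0.1544313...
print(1 - 2 * np.euler_gamma)  # 1 - 2*gamma = -0.15443132980306573
```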
[ { "answer_id": 2098114, "score": 10, "text": "<p>Following @tired's comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order of integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" } ]
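A small cross-check of the closing steps in the two answers above (a sketch using SciPy and SymPy, not part of the original answers): the first answer ends with the integral representation of $\gamma$, the second with $\Psi(1)+\Psi(2)=1-2\gamma$.

```python
import numpy as np
from scipy.integrate import quad
from sympy import digamma

# gamma = int_0^1 (1/log u + 1/(1 - u)) du; the combined integrand is bounded,
# though floating-point cancellation very near u = 1 limits the accuracy a bit.
g, _ = quad(lambda u: 1.0 / np.log(u) + 1.0 / (1.0 - u), 0, 1)
print(g, np.euler_gamma)        # both ~ 0.5772156649

# Psi(1) = -EulerGamma and Psi(2) = 1 - EulerGamma, so the sum is 1 - 2*gamma
print(digamma(1) + digamma(2))  # -> 1 - 2*EulerGamma
```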
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2098114, "score": 10, "text": "<p>Folllowing @tired Comment(s):\n$$ \n\\begin{align} \n\\text{Let} &amp;\\colon\\space x\\,y=u,\\,\\,y=v \\space\\Rightarrow\\space x/y=u/v^2,\\,\\,y\\,dx=du,\\,\\,dy=dv \\\\[3mm] \n\\color{red}{I} &amp;= \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{x}{1-x\\,y}\\,\\frac{\\log{x}-\\log{y}}{\\log{x}+\\log{y}}\\right)\\,dx\\,dy = \\int_{0}^{1}\\int_{0}^{1}\\left(\\frac{(x/y)}{1-(x\\,y)}\\,\\frac{\\log{(x/y)}}{\\log{(x\\,y)}}\\right)\\,(y\\,dx)\\,dy \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{0}^{v}\\left(\\frac{(u/v^2)}{1-u}\\,\\frac{\\log{(u/v^2)}}{\\log{u}}\\right)\\,du\\,dv = \\int_{0}^{1}\\int_{\\color{red}{0}}^{\\color{red}{v}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,du\\,dv} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\int_{\\color{red}{u}}^{\\color{red}{1}}\\left[\\frac{u}{1-u}\\,\\frac{1}{v^2}\\left(1-2\\frac{\\log{v}}{\\log{u}}\\right)\\right]\\color{red}{\\,dv\\,du} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{1\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\int_{u}^{1}\\left(\\frac{1}{v^2}-\\frac{2}{\\log{u}}\\frac{\\log{v}}{v^2}\\right)\\,dv \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\,du\\,\\left[-\\frac{1}{v}+\\frac{2}{\\log{u}}\\frac{1+\\log{v}}{v}\\right]_{u}^{1} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{2\\}} \\\\[3mm] \n&amp;= \\int_{0}^{1}\\frac{u}{1-u}\\left(\\frac{-2}{\\log{u}}\\frac{1-u}{u}-\\frac{1+u}{u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u}{1-u}\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}-\\frac{1+u\\color{red}{-1+1}}{1-u}\\right)\\,du = \\int_{0}^{1}\\left(\\frac{-2}{\\log{u}}+\\frac{-2}{1-u}+1\\right)\\,du \\\\[3mm] \n&amp;= \\int_{0}^{1}\\,du\\,-2\\int_{0}^{1}\\left(\\frac{1}{\\log{u}}+\\frac{1}{1-u}\\right)\\,du = \\color{red}{1 - 2\\,\\gamma} \\quad\\qquad\\qquad\\qquad\\qquad\\color{blue}{\\{3\\}} \\\\[3mm] \n\\end{align} \n$$ \n <br/><br>\n$\\,\\color{blue}{\\{1\\}}\\colon\\,$ <a href=\"http://mathinsight.org/double_integral_change_order_integration_examples\">Change order integration</a>. <br/> \n$\\,\\color{blue}{\\{2\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Integration_by_parts\">IBP:</a> $\\,\\int\\log{x}\\,\\frac{dx}{x^2},\\,\\{u=\\log{x}\\,{\\small\\Rightarrow}\\,du=\\frac{dx}{x},\\, dv=\\frac{dx}{x^2}\\,{\\small\\Rightarrow}\\,v=\\frac{-1}{x}\\},\\,=-\\frac{\\log{x}}{x}+\\int\\frac{dx}{x^2}=-\\frac{1+\\log{x}}{x}$. <br/> \n$\\,\\color{blue}{\\{3\\}}\\colon\\,$ <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals\">Integral Representation of Euler–Mascheroni Constant:</a> $\\quad\\gamma = \\int_{0}^{1}\\left(\\frac{1}{\\log{x}}+\\frac{1}{1-x}\\right)\\,dx$. 
<br/> </p>\n" }, { "answer_id": 2153890, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p>$\\ds{\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y =\n1 - 2\\gamma\\,,\\qquad\\gamma:\\ Euler\\mbox{-}Mascheroni\\ Constant}$.</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\\,\n{\\ln\\pars{x} - \\ln\\pars{y} \\over \\ln\\pars{x} + \\ln\\pars{y}}\\,\\dd x\\,\\dd y\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{1}\\int_{0}^{1}{x \\over 1 - xy}\n\\pars{x^{\\mu} - y^{\\mu}}\n\\bracks{-\\int_{0}^{\\infty}\\pars{xy}^{\\,t}\\,\\dd t}\\,\\dd x\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{1}\n{x^{t + 1}y^{t + \\mu} - x^{t + \\mu + 1}y^{t} \\over 1 - xy}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{0}^{y}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd x\\,\\dd y\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\\int_{x}^{1}\n{x^{t + 1}y^{\\mu - 2} - x^{t + \\mu + 1}y^{-\\mu - 2} \\over 1 - x}\n\\,\\dd y\\,\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{x^{t + 1}\\,{1 - x^{\\mu - 1} \\over \\mu - 1} +\nx^{t + \\mu + 1}\\,{1 - x^{-\\mu - 1}\\over \\mu + 1}}\n{\\dd x\\,\\dd t \\over 1 - x}\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\\int_{0}^{1}\n\\pars{{1 \\over \\mu - 1}\\,{x^{t + 1} - x^{t + \\mu} \\over 1 - x} +\n{1 \\over \\mu + 1}\\,{x^{t + \\mu + 1} - x^{t} \\over 1 - x}}\n\\dd x\\,\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\int_{0}^{\\infty}\n\\bracks{{\\Psi\\pars{t + \\mu + 1} - \\Psi\\pars{t + 2} \\over \\mu - 1} +\n{\\Psi\\pars{t + 1} - \\Psi\\pars{t + \\mu + 2} \\over \\mu + 1}}\n\\dd t\\,\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\left.\\partiald{}{\\mu}\\bracks{%\n{1 \\over \\mu - 1}\\,\\ln\\pars{\\Gamma\\pars{t + \\mu + 1} \\over \\Gamma\\pars{t + 2}} +\n{1 \\over \\mu + 1}\\,\\ln\\pars{\\Gamma\\pars{t + 1} \\over \\Gamma\\pars{t + \\mu + 2}}}\n_{\\ t\\ =\\ 0}^{\\ t\\ \\to\\ \\infty}\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0}\n\\\\[5mm] = &amp;\\\n\\partiald{}{\\mu}\\bracks{%\n-\\,{\\ln\\pars{\\Gamma\\pars{\\mu + 1}} \\over \\mu - 1} +\n{\\ln\\pars{\\Gamma\\pars{\\mu + 2}} \\over \\mu + 1}}_{\\ \\mu\\ =\\ 0} =\n\\Psi\\pars{1} + \\Psi\\pars{2} = 2\\Psi\\pars{1} + 1 =\n\\bbx{\\ds{1 - 
2\\gamma}}\n\\end{align}</p>\n" } ]
Question: Show a detailed proof of: $\int_{0}^{1}\int_{0}^{1}\left({x\over 1-xy}\cdot{\ln{x}-\ln{y}\over \ln{x}+\ln{y}}\right)\mathrm dx\mathrm dy=1-2\gamma$ Answer 1: <p>Following @tired's comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= \int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change of integration order</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of the Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. <br/> </p>
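Answer 1's reduction is easy to cross-check numerically before moving on. The sketch below is a minimal check, assuming the third-party Python library mpmath is available (`mp`, `quad`, `log`, `euler` and `mpf` are standard mpmath names); it integrates the reduced one-dimensional integrand $-2/\log u-(1+u)/(1-u)$ and the integral representation of $\gamma$ from step $\{3\}$, and compares them with $1-2\gamma$ and $\gamma$.

```python
# Minimal numerical sanity check of Answer 1's reduction; assumes the
# third-party mpmath library (pip install mpmath).
from mpmath import mp, mpf, quad, log, euler

mp.dps = 30  # work with 30 significant digits

# One-dimensional integrand left after swapping the integration order and
# integrating out v: -2/log(u) - (1+u)/(1-u).  The two blow-ups at u = 1
# cancel (the limit there is 0), the limit at u = 0 is -1, and mpmath's
# tanh-sinh quadrature does not sample the endpoints themselves.
reduced = quad(lambda u: -2/log(u) - (1 + u)/(1 - u), [0, 1])

# Integral representation of gamma invoked in step {3}.
gamma_repr = quad(lambda u: 1/log(u) + 1/(1 - u), [0, 1])

print(reduced, mpf(1) - 2*euler)  # both ~ -0.15443132980306572...
print(gamma_repr, +euler)         # both ~  0.57721566490153286...
```

Agreement to all printed digits lends confidence to the change of variables and the order swap in steps $\{1\}$–$\{3\}$ above.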
Answer 2: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p>
<br/> </p> Answer 18: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 19: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 20: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 21: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 22: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 23: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 24: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 25: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 26: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 27: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 28: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 29: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 30: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 31: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 32: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 33: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 34: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 35: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
\int_{0}^{1}\int_{0}^{v}\left(\frac{(u/v^2)}{1-u}\,\frac{\log{(u/v^2)}}{\log{u}}\right)\,du\,dv = \int_{0}^{1}\int_{\color{red}{0}}^{\color{red}{v}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,du\,dv} \\[3mm] &amp;= \int_{0}^{1}\int_{\color{red}{u}}^{\color{red}{1}}\left[\frac{u}{1-u}\,\frac{1}{v^2}\left(1-2\frac{\log{v}}{\log{u}}\right)\right]\color{red}{\,dv\,du} \quad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{1\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\int_{u}^{1}\left(\frac{1}{v^2}-\frac{2}{\log{u}}\frac{\log{v}}{v^2}\right)\,dv \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\,du\,\left[-\frac{1}{v}+\frac{2}{\log{u}}\frac{1+\log{v}}{v}\right]_{u}^{1} \qquad\qquad\qquad\qquad\qquad\qquad\color{blue}{\{2\}} \\[3mm] &amp;= \int_{0}^{1}\frac{u}{1-u}\left(\frac{-2}{\log{u}}\frac{1-u}{u}-\frac{1+u}{u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u}{1-u}\right)\,du \\[3mm] &amp;= \int_{0}^{1}\left(\frac{-2}{\log{u}}-\frac{1+u\color{red}{-1+1}}{1-u}\right)\,du = \int_{0}^{1}\left(\frac{-2}{\log{u}}+\frac{-2}{1-u}+1\right)\,du \\[3mm] &amp;= \int_{0}^{1}\,du\,-2\int_{0}^{1}\left(\frac{1}{\log{u}}+\frac{1}{1-u}\right)\,du = \color{red}{1 - 2\,\gamma} \quad\qquad\qquad\qquad\qquad\color{blue}{\{3\}} \\[3mm] \end{align} $$ <br/><br> $\,\color{blue}{\{1\}}\colon\,$ <a href="http://mathinsight.org/double_integral_change_order_integration_examples">Change order integration</a>. <br/> $\,\color{blue}{\{2\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Integration_by_parts">IBP:</a> $\,\int\log{x}\,\frac{dx}{x^2},\,\{u=\log{x}\,{\small\Rightarrow}\,du=\frac{dx}{x},\, dv=\frac{dx}{x^2}\,{\small\Rightarrow}\,v=\frac{-1}{x}\},\,=-\frac{\log{x}}{x}+\int\frac{dx}{x^2}=-\frac{1+\log{x}}{x}$. <br/> $\,\color{blue}{\{3\}}\colon\,$ <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals">Integral Representation of Euler–Mascheroni Constant:</a> $\quad\gamma = \int_{0}^{1}\left(\frac{1}{\log{x}}+\frac{1}{1-x}\right)\,dx$. 
<br/> </p> Answer 36: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y = 1 - 2\gamma\,,\qquad\gamma:\ Euler\mbox{-}Mascheroni\ Constant}$.</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy}\, {\ln\pars{x} - \ln\pars{y} \over \ln\pars{x} + \ln\pars{y}}\,\dd x\,\dd y \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{1}\int_{0}^{1}{x \over 1 - xy} \pars{x^{\mu} - y^{\mu}} \bracks{-\int_{0}^{\infty}\pars{xy}^{\,t}\,\dd t}\,\dd x\,\dd y\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{1} {x^{t + 1}y^{t + \mu} - x^{t + \mu + 1}y^{t} \over 1 - xy} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{0}^{y} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd x\,\dd y\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1}\int_{x}^{1} {x^{t + 1}y^{\mu - 2} - x^{t + \mu + 1}y^{-\mu - 2} \over 1 - x} \,\dd y\,\dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{x^{t + 1}\,{1 - x^{\mu - 1} \over \mu - 1} + x^{t + \mu + 1}\,{1 - x^{-\mu - 1}\over \mu + 1}} {\dd x\,\dd t \over 1 - x}\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty}\int_{0}^{1} \pars{{1 \over \mu - 1}\,{x^{t + 1} - x^{t + \mu} \over 1 - x} + {1 \over \mu + 1}\,{x^{t + \mu + 1} - x^{t} \over 1 - x}} \dd x\,\dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\int_{0}^{\infty} \bracks{{\Psi\pars{t + \mu + 1} - \Psi\pars{t + 2} \over \mu - 1} + {\Psi\pars{t + 1} - \Psi\pars{t + \mu + 2} \over \mu + 1}} \dd t\,\right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \left.\partiald{}{\mu}\bracks{% {1 \over \mu - 1}\,\ln\pars{\Gamma\pars{t + \mu + 1} \over \Gamma\pars{t + 2}} + {1 \over \mu + 1}\,\ln\pars{\Gamma\pars{t + 1} \over \Gamma\pars{t + \mu + 2}}} _{\ t\ =\ 0}^{\ t\ \to\ \infty}\, \right\vert_{\ \mu\ =\ 0} \\[5mm] = &amp;\ \partiald{}{\mu}\bracks{% -\,{\ln\pars{\Gamma\pars{\mu + 1}} \over \mu - 1} + {\ln\pars{\Gamma\pars{\mu + 2}} \over \mu + 1}}_{\ \mu\ =\ 0} = \Psi\pars{1} + \Psi\pars{2} = 2\Psi\pars{1} + 1 = \bbx{\ds{1 - 2\gamma}} \end{align}</p> Answer 37: <p>Folllowing @tired Comment(s): $$ \begin{align} \text{Let} &amp;\colon\space x\,y=u,\,\,y=v \space\Rightarrow\space x/y=u/v^2,\,\,y\,dx=du,\,\,dy=dv \\[3mm] \color{red}{I} &amp;= \int_{0}^{1}\int_{0}^{1}\left(\frac{x}{1-x\,y}\,\frac{\log{x}-\log{y}}{\log{x}+\log{y}}\right)\,dx\,dy = \int_{0}^{1}\int_{0}^{1}\left(\frac{(x/y)}{1-(x\,y)}\,\frac{\log{(x/y)}}{\log{(x\,y)}}\right)\,(y\,dx)\,dy \\[3mm] &amp;= 
Let $A$ be a $2 \times 2$ matrix such that $A^2=I$. Find trace and determinant
2,082,303
10
https://math.stackexchange.com/questions/2082303/let-a-be-a-2-times-2-matrix-such-that-a2-i-find-trace-and-determinant
<blockquote> <p>Suppose <span class="math-container">$A$</span> is a <span class="math-container">$2 \times 2$</span> matrix, <span class="math-container">$A^2=I$</span>. If <span class="math-container">$A \ne I,-I$</span>, find <span class="math-container">$\mbox{tr}(A)$</span> and <span class="math-container">$\det(A)$</span>.</p> </blockquote> <hr> <p>My solution:</p> <p>There are <span class="math-container">$3$</span> cases for the eigenvalues of <span class="math-container">$A$</span>:</p> <p>Case (i): <span class="math-container">$1,1$</span>.</p> <p>Case (ii): <span class="math-container">$-1,-1$</span>.</p> <p>Case (iii): <span class="math-container">$1,-1$</span>.</p> <p>I guess only case (iii) holds, but I have no idea how to rule out cases (i) and (ii).</p> <p>I know that <span class="math-container">$tr(A)$</span> is the sum of the eigenvalues and <span class="math-container">$det(A)$</span> is the product of the eigenvalues.</p> <p>So is the answer <span class="math-container">$tr(A)=0$</span>, <span class="math-container">$det(A)=-1$</span>?</p> <p>Thanks.</p>
[ { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. 
So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" } ]
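<p>All of the answers above arrive at the same pair of values. The short check below is a sketch, assuming NumPy; the particular matrix is just one arbitrarily chosen involution other than $\pm I$.</p>
<pre><code>import numpy as np

# A concrete 2x2 matrix with A^2 = I that is neither I nor -I.
A = np.array([[3.0, -4.0],
              [2.0, -3.0]])

assert np.allclose(A @ A, np.eye(2))  # A is an involution
print(np.trace(A))                    # 0.0
print(np.linalg.det(A))               # -1.0 (up to floating-point rounding)
</code></pre>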
So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. 
Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. 
<br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. 
So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. 
So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. 
Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. 
<br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. 
So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. 
So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. <br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. 
Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. So $A$ is similar to $\\left[\\begin{array}{cc} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{array}\\right]$, which gives you $\\mbox{tr}(A)=0$, $\\mbox{det}(A)=-1$.</p>\n" }, { "answer_id": 2082758, "score": 1, "text": "<p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0\\Leftrightarrow I_{2}-tr(A)\\cdot A+det(A)\\cdot I_{2}=0 (*) $.\n$ A^{2}=I_{2}\\Leftrightarrow det(A^{2})=det(I_{2})\\Leftrightarrow (det(A))^{2}=1 $, so $ det(A)=\\pm 1 $.\n<br/>\nSuppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\\cdot A=2I_{2}\\Leftrightarrow tr(tr(A)\\cdot A)=tr(2I_{2})\\Leftrightarrow tr(A)\\cdot tr(A)=2tr(I_{2})\\Leftrightarrow (tr(A))^{2}=4, $ hence $ tr(A)=\\pm2 $. <br/>\nIf $ tr(A)=2 $ we obtain that $ 2A=2I_{2}\\Leftrightarrow A=I_{2} $ , contradiction. 
<br/>\nIf $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\\Leftrightarrow A=-I_{2} $ , contradiction.<br/>\nSo $ det(A)=-1 $ which implies $ -tr(A)\\cdot A=0 \\Leftrightarrow tr(A)=0 $.</p>\n" }, { "answer_id": 2082321, "score": 9, "text": "<p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p>\n\n<ul>\n<li><p>$X-1$, and then $A=I$.</p></li>\n<li><p>$X+1$, and then $A=-I$.</p></li>\n<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>\n</ul>\n\n<p>By hypothesis, the only possibility is the last one.</p>\n\n<p>Since the characteristic polynomial of a $2\\times2$ matrix is $X^2-\\operatorname{tr}(A)X+\\det(A)$, we conclude that $\\operatorname {tr}(A)=0$ and $\\det(A)=-1$.</p>\n\n<hr>\n\n<p>We can avoid talking about minimal polynomials as follows.</p>\n\n<p>From $A^2=I$ and $A^2-\\operatorname{tr}(A)A+\\det(A)I=0$ we get $\\operatorname{tr}(A)A=(1+\\det(A))I$. If $\\operatorname{tr}(A)\\ne0$, then $A=\\lambda I$ and so $\\lambda=\\pm1$, which is not allowed. Therefore, $\\operatorname {tr}(A)=0$ and so $\\det(A)=-1$.</p>\n" }, { "answer_id": 2082316, "score": 6, "text": "<p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p>\n\n<p>We indeed know that the eigenvalues must be $\\pm 1$. Since $A \\notin \\{I,-I\\}$, there are only $3$ possible Jordan normal forms $J$:\n$$\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 1\\\\\n0 &amp; -1\\end{pmatrix}, \\begin{pmatrix}-1 &amp; 0\\\\\n0 &amp; 1\\end{pmatrix}$$\nHowever, the square of the first matrix is\n$$ \\begin{pmatrix}1 &amp; 2\\\\\n0 &amp; 1\\end{pmatrix}$$\nand the square of the second matrix is \n$$ \\begin{pmatrix}1 &amp; -2\\\\\n0 &amp; 1\\end{pmatrix}$$\nSo the only Jordan normal form that works is the third one, which has $\\det(A) = -1$ and $\\operatorname{tr}(A) = 0$.</p>\n" }, { "answer_id": 2082328, "score": 4, "text": "<p>Since $A$ is a \"small\" matrix, I find easier a brute-force approach than using big theorems:</p>\n\n<p>Let\n$$A=\\begin{pmatrix}a&amp;b\\\\c&amp;d\\end{pmatrix}$$\nThen\n$$A^2=\\begin{pmatrix}a^2+bc&amp;ab+bd\\\\ac+cd&amp;bc+d^2\\end{pmatrix}=\\begin{pmatrix}1&amp;0\\\\0&amp;1\\end{pmatrix}$$</p>\n\n<p>Suppose that $a+d=\\text{Tr}(A)\\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\\text{Tr}(A)=0$, that is, $a=-d$.</p>\n\n<p>Now\n$$\\det(A)=ad-bc=-a^2-bc=-1$$</p>\n" }, { "answer_id": 2082308, "score": 2, "text": "<p>For diagonalisable $A$, you always have\n$$\nA=UDU^{-1}\n$$\nfor a diagonal matrix $D$.</p>\n\n<p>If $\\lambda_1=\\lambda_2=1$, then\n$$\nD=1\n$$\nwhich would imply $A=1$.</p>\n\n<p>By a similar line of reasoning, if $\\lambda_1=\\lambda_2=-1$, then\n$$\nD=-1\n$$\nwhich would imply $A=-1$.</p>\n\n<p>As you know that $1\\neq A\\neq -1$, the rest of the solution in the OP follows.</p>\n" }, { "answer_id": 2082737, "score": 2, "text": "<p>Because $A \\ne I$ and $A \\ne -I$, then there are vectors $x_1$, $x_2$ such that\n$$\n y_1= (A-I)x_1 \\ne 0 \\\\\n y_2 = (A+I)x_2 \\ne 0.\n$$\nBecause $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\\pm 1$. 
Question: Let $A$ be a $2 \times 2$ matrix with $A \neq \pm I$ such that $A^2=I$. Find its trace and determinant.
Answer 1: <p>The minimal polynomial of $A$ divides $X^2-1$, so there are three possibilities:</p>
<ul>
<li><p>$X-1$, and then $A=I$;</p></li>
<li><p>$X+1$, and then $A=-I$;</p></li>
<li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li>
</ul>
<p>By the hypothesis $A \neq \pm I$, only the last case is possible.</p>
<p>Since the characteristic polynomial of a $2\times2$ matrix is $X^2-\operatorname{tr}(A)X+\det(A)$, we conclude that $\operatorname{tr}(A)=0$ and $\det(A)=-1$.</p>
<hr>
<p>We can avoid minimal polynomials altogether. From $A^2=I$ and the Cayley–Hamilton identity $A^2-\operatorname{tr}(A)A+\det(A)I=0$ we get $\operatorname{tr}(A)A=(1+\det(A))I$. If $\operatorname{tr}(A)\ne0$, then $A=\lambda I$ for some scalar $\lambda$, and $A^2=I$ forces $\lambda=\pm1$, which is not allowed. Therefore $\operatorname{tr}(A)=0$, and hence $\det(A)=-1$.</p>
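<p>A quick sanity check of this argument (a minimal sympy sketch; the reflection matrix below is just one convenient involution other than $\pm I$):</p>
<pre><code>import sympy as sp

# One concrete involution other than +-I: reflection across the line y = x.
A = sp.Matrix([[0, 1], [1, 0]])

assert A**2 == sp.eye(2)        # A is an involution
print(A.charpoly().as_expr())   # lambda**2 - 1, matching X^2 - 1 above
print(A.trace(), A.det())       # 0 -1
</code></pre>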
Answer 2: <p>Let $J$ be the Jordan normal form of $A$, so there is an invertible matrix $P$ with $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since a matrix and its Jordan normal form have the same trace and determinant, it suffices to consider $J$.</p>
<p>Since $J^2=I$, every eigenvalue $\lambda$ satisfies $\lambda^2=1$, so the eigenvalues must be $\pm 1$. Since $A \notin \{I,-I\}$, there are only $3$ possible Jordan normal forms $J$:
$$\begin{pmatrix}1 &amp; 1\\
0 &amp; 1\end{pmatrix}, \begin{pmatrix}-1 &amp; 1\\
0 &amp; -1\end{pmatrix}, \begin{pmatrix}-1 &amp; 0\\
0 &amp; 1\end{pmatrix}$$
However, the square of the first matrix is
$$ \begin{pmatrix}1 &amp; 2\\
0 &amp; 1\end{pmatrix}$$
and the square of the second is
$$ \begin{pmatrix}1 &amp; -2\\
0 &amp; 1\end{pmatrix}$$
So the only Jordan normal form that works is the third one, which has $\det(A) = -1$ and $\operatorname{tr}(A) = 0$.</p>
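<p>The case elimination can be replayed mechanically; a short sketch squaring exactly the three candidates listed above:</p>
<pre><code>import sympy as sp

candidates = [
    sp.Matrix([[1, 1], [0, 1]]),    # square is [[1, 2], [0, 1]]
    sp.Matrix([[-1, 1], [0, -1]]),  # square is [[1, -2], [0, 1]]
    sp.Matrix([[-1, 0], [0, 1]]),   # square is the identity
]
for J in candidates:
    print(J**2 == sp.eye(2))        # False, False, True
</code></pre>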
Answer 3: <p>Since $A$ is a "small" matrix, I find a brute-force approach easier than big theorems.</p>
<p>Let
$$A=\begin{pmatrix}a&amp;b\\c&amp;d\end{pmatrix}$$
Then
$$A^2=\begin{pmatrix}a^2+bc&amp;ab+bd\\ac+cd&amp;bc+d^2\end{pmatrix}=\begin{pmatrix}1&amp;0\\0&amp;1\end{pmatrix}$$</p>
<p>Suppose that $a+d=\text{Tr}(A)\neq 0$. Then the entries $(1,2)$ and $(2,1)$ give $b(a+d)=c(a+d)=0$, hence $b=c=0$. The main diagonal then gives $a^2=d^2=1$, and since $a+d\neq 0$ we must have $a=d=\pm 1$, i.e. $A=\pm I$, a contradiction. So $\text{Tr}(A)=0$, that is, $a=-d$.</p>
<p>Now
$$\det(A)=ad-bc=-a^2-bc=-1$$
since the $(1,1)$ entry of $A^2=I$ says $a^2+bc=1$.</p>
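<p>A small symbolic check of the trace-zero form just derived (the parametrisation $a=-d$ is the one obtained above):</p>
<pre><code>import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[a, b], [c, -a]])   # general trace-zero 2x2 matrix

print(A**2)      # Matrix([[a**2 + b*c, 0], [0, a**2 + b*c]])
print(A.det())   # -a**2 - b*c, i.e. det(A) = -1 whenever A**2 = I
</code></pre>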
Answer 4: <p>Since $A^2=I$, the minimal polynomial of $A$ divides $X^2-1$, which has distinct roots, so $A$ is diagonalisable:
$$
A=UDU^{-1}
$$
for a diagonal matrix $D$ whose entries are the eigenvalues $\lambda_1,\lambda_2\in\{1,-1\}$.</p>
<p>If $\lambda_1=\lambda_2=1$, then $D=I$, which would imply $A=I$; by the same reasoning, $\lambda_1=\lambda_2=-1$ gives $D=-I$ and hence $A=-I$. Since $A\neq\pm I$, the eigenvalues are $1$ and $-1$, so $\operatorname{tr}(A)=1+(-1)=0$ and $\det(A)=1\cdot(-1)=-1$.</p>
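<p>A numerical illustration of this diagonalisation viewpoint (the random seed is an arbitrary choice, and a random $P$ is invertible with probability $1$):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2))              # generically invertible
A = P @ np.diag([1.0, -1.0]) @ np.linalg.inv(P)

print(np.allclose(A @ A, np.eye(2)))         # True
print(np.trace(A), np.linalg.det(A))         # ~0.0 and ~-1.0 (float error)
</code></pre>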
Answer 5: <p>Because $A \ne I$ and $A \ne -I$, there are vectors $x_1$, $x_2$ such that
$$
 y_1= (A-I)x_1 \ne 0, \qquad
 y_2 = (A+I)x_2 \ne 0.
$$
Because $A^2=I$, we have $(A+I)y_1=(A^2-I)x_1=0$, i.e. $Ay_1=-y_1$; likewise $Ay_2=y_2$. Being eigenvectors for the distinct eigenvalues $-1$ and $1$, the vectors $y_1,y_2$ are linearly independent, so they form an eigenbasis of $A$. Hence $A$ is similar to $\left[\begin{array}{cc} -1 &amp; 0 \\ 0 &amp; 1 \end{array}\right]$, which gives you $\mbox{tr}(A)=0$, $\mbox{det}(A)=-1$.</p>
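<p>The construction of $y_1,y_2$ made concrete (the matrix and the test vector are arbitrary choices satisfying the hypotheses):</p>
<pre><code>import numpy as np

A = np.array([[3.0, -4.0], [2.0, -3.0]])   # satisfies A @ A == I and A != +-I
I2 = np.eye(2)
x = np.array([1.0, 0.0])                   # (A - I)x and (A + I)x are both nonzero

y1 = (A - I2) @ x
y2 = (A + I2) @ x
print(np.allclose(A @ y1, -y1))            # True: y1 is an eigenvector for -1
print(np.allclose(A @ y2, y2))             # True: y2 is an eigenvector for +1
print(np.trace(A), np.linalg.det(A))       # 0.0 and -1.0 (up to float error)
</code></pre>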
Answer 6: <p>From the Cayley–Hamilton theorem we have $ A^{2}-tr(A)\cdot A+det(A)\cdot I_{2}=0 $, which with $ A^{2}=I_{2} $ becomes $ I_{2}-tr(A)\cdot A+det(A)\cdot I_{2}=0 \; (*) $.
Also $ A^{2}=I_{2}\Rightarrow (det(A))^{2}=det(I_{2})=1 $, so $ det(A)=\pm 1 $.
<br/>
Suppose $ det(A)=1 $. From $ (*) $ we get $ tr(A)\cdot A=2I_{2} $; taking traces, $ (tr(A))^{2}=2\,tr(I_{2})=4 $, hence $ tr(A)=\pm2 $.
<br/>
If $ tr(A)=2 $ then $ 2A=2I_{2} $, i.e. $ A=I_{2} $, contradiction.
<br/>
If $ tr(A)=-2 $ then $ -2A=2I_{2} $, i.e. $ A=-I_{2} $, contradiction.
<br/>
So $ det(A)=-1 $, and $ (*) $ reduces to $ tr(A)\cdot A=0 $. If $ tr(A)\neq 0 $ this would force $ A=0 $, contradicting $ A^{2}=I_{2} $; hence $ tr(A)=0 $.</p>
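<p>The Cayley–Hamilton identity that $(*)$ rests on, verified symbolically for a general $2\times 2$ matrix:</p>
<pre><code>import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# A**2 - tr(A)*A + det(A)*I should be identically zero.
ch = A**2 - A.trace() * A + A.det() * sp.eye(2)
print(sp.simplify(ch))   # Matrix([[0, 0], [0, 0]])
</code></pre>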
<br/> If $ tr(A)=-2 $ we obtain that $ 2A=-2I_{2}\Leftrightarrow A=-I_{2} $ , contradiction.<br/> So $ det(A)=-1 $ which implies $ -tr(A)\cdot A=0 \Leftrightarrow tr(A)=0 $.</p> Answer 31: <p>The minimal polynomial of $A$ is a factor of $X^2-1$ and so there are three possibilities:</p> <ul> <li><p>$X-1$, and then $A=I$.</p></li> <li><p>$X+1$, and then $A=-I$.</p></li> <li><p>$X^2-1$, and then it coincides with the characteristic polynomial.</p></li> </ul> <p>By hypothesis, the only possibility is the last one.</p> <p>Since the characteristic polynomial of a $2\times2$ matrix is $X^2-\operatorname{tr}(A)X+\det(A)$, we conclude that $\operatorname {tr}(A)=0$ and $\det(A)=-1$.</p> <hr> <p>We can avoid talking about minimal polynomials as follows.</p> <p>From $A^2=I$ and $A^2-\operatorname{tr}(A)A+\det(A)I=0$ we get $\operatorname{tr}(A)A=(1+\det(A))I$. If $\operatorname{tr}(A)\ne0$, then $A=\lambda I$ and so $\lambda=\pm1$, which is not allowed. Therefore, $\operatorname {tr}(A)=0$ and so $\det(A)=-1$.</p> Answer 32: <p>Let $J$ be the Jordan normal form of your matrix, so there exists a matrix $P$ such that $PAP^{-1} = J$. Then $J^2 = PA^2P^{-1} = PP^{-1} = I$. Since the Jordan normal form retains the trace and determinant, it suffices to consider these.</p> <p>We indeed know that the eigenvalues must be $\pm 1$. Since $A \notin \{I,-I\}$, there are only $3$ possible Jordan normal forms $J$: $$\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}, \begin{pmatrix}-1 &amp; 1\\ 0 &amp; -1\end{pmatrix}, \begin{pmatrix}-1 &amp; 0\\ 0 &amp; 1\end{pmatrix}$$ However, the square of the first matrix is $$ \begin{pmatrix}1 &amp; 2\\ 0 &amp; 1\end{pmatrix}$$ and the square of the second matrix is $$ \begin{pmatrix}1 &amp; -2\\ 0 &amp; 1\end{pmatrix}$$ So the only Jordan normal form that works is the third one, which has $\det(A) = -1$ and $\operatorname{tr}(A) = 0$.</p> Answer 33: <p>Since $A$ is a "small" matrix, I find easier a brute-force approach than using big theorems:</p> <p>Let $$A=\begin{pmatrix}a&amp;b\\c&amp;d\end{pmatrix}$$ Then $$A^2=\begin{pmatrix}a^2+bc&amp;ab+bd\\ac+cd&amp;bc+d^2\end{pmatrix}=\begin{pmatrix}1&amp;0\\0&amp;1\end{pmatrix}$$</p> <p>Suppose that $a+d=\text{Tr}(A)\neq 0$. Then, from cells $(1,2)$ and $(2,1)$ we get $b=c=0$. Now, from the main diagonal we get $a^2=d^2=1$, and since $A$ is not $I$ nor $-I$, we get that $ad=-1$, a contradiction. So $\text{Tr}(A)=0$, that is, $a=-d$.</p> <p>Now $$\det(A)=ad-bc=-a^2-bc=-1$$</p> Answer 34: <p>For diagonalisable $A$, you always have $$ A=UDU^{-1} $$ for a diagonal matrix $D$.</p> <p>If $\lambda_1=\lambda_2=1$, then $$ D=1 $$ which would imply $A=1$.</p> <p>By a similar line of reasoning, if $\lambda_1=\lambda_2=-1$, then $$ D=-1 $$ which would imply $A=-1$.</p> <p>As you know that $1\neq A\neq -1$, the rest of the solution in the OP follows.</p> Answer 35: <p>Because $A \ne I$ and $A \ne -I$, then there are vectors $x_1$, $x_2$ such that $$ y_1= (A-I)x_1 \ne 0 \\ y_2 = (A+I)x_2 \ne 0. $$ Because $A^2=I$, it follows that $(A+I)y_1=0$ or $Ay_1=-y_1$. Likewise $Ay_2=y_2$. So the vectors $y_1,y_2$ are a basis of eigenvectors of $A$ with eigenvalues $\pm 1$. So $A$ is similar to $\left[\begin{array}{cc} -1 &amp; 0 \\ 0 &amp; 1 \end{array}\right]$, which gives you $\mbox{tr}(A)=0$, $\mbox{det}(A)=-1$.</p> Answer 36: <p>From Cayley-Hamilton theorem we have that $ A^{2}-tr(A)\cdot A+det(A)\cdot I_{2}=0\Leftrightarrow I_{2}-tr(A)\cdot A+det(A)\cdot I_{2}=0 (*) $. 
Why was it necessary for the Riemann integral to consider all partitions and taggings?
2,086,460
10
https://math.stackexchange.com/questions/2086460/why-was-it-necessary-for-the-riemann-integral-to-consider-all-partitions-and-tag
<p>Suppose we have a function $f:[a,b]\to\mathbb{R}$. The Riemann integral $\int$ of the function is defined somewhat like this:</p> <blockquote> <p>For any tagged partition $\dot{P}=(P,t_P)$ (where $P$ is the partition, and $t_P=\{t_i\}$ the corresponding tagging) of $[a,b]$, denote the Riemann sum of $f$ corresponding to $\dot{P}$ by $S(f,\dot{P})$. If $S(f,\dot{P})$ approaches a definite value $I$ as $||\dot{P}||\to0$, then $$\int f = I$$</p> </blockquote> <p>By the definition, $\dot{P}$ runs over the set of all tagged partitions of $[a,b]$, taking into account <em>any</em> partitioning and <em>any</em> tagging for that particular partition. Is this much freedom of choice really necessary? From <a href="https://math.stackexchange.com/questions/1909523/riemann-integral-as-a-limit-of-sum">this question</a>, I know that if we specify the choices for both $P$ and $t$, the integral is no longer equivalent to the Riemann integral (in the link, $P$ is always an equipartition, and $t$ is the set of the left endpoints of $P$). What if we fix exactly one of $P,t$? Is the resulting "integral" still equivalent to the Riemann integral?</p> <p>For example, if I define another integral $\int_1$ similarly (but specifying choice of $P$):</p> <blockquote> <p>For $n\in\mathbb{N}$ define $P_n$ to be the equipartition of $[a,b]$ into $n$ subintervals of equal length, and for <em>any</em> tagging $t_{P_n}$ of $P_n$ denote the corresponding tagged partition as $\dot{P}_{n,t}$. If $S(f,\dot{P}_{n,t})$ approaches a definite value $I_1$ as $n\to \infty$, then $$\int_1f = I_1$$</p> </blockquote> <p>I can similarly define some other $\int_2$ by considering all partitions, but specifying $t$, for example, by taking the mid-points of each subinterval of a given partition. Will this $\int_1$ or $\int_2$ be equivalent to $\int$?</p> <p>More or less, what is the advantage of simultaneously varying $P$ as well as $t$, which couldn't be achieved by doing that one at a time?</p> <p>[Most probably my definitions are not extremely precise, I apologise for those]</p>
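To make the $\int_1$ definition above concrete (an illustrative sketch added here, assuming NumPy is available; $f(x)=x^2$ on $[0,1]$ is an arbitrary choice of a well-behaved function), one can compute Riemann sums over equipartitions with randomly drawn tags and watch them settle near $1/3$ regardless of the tagging. This only illustrates the definition for one nice $f$; it does not by itself settle the equivalence question asked above.

<pre><code>import numpy as np

f = lambda x: x**2                 # arbitrary, well-behaved test function
rng = np.random.default_rng(1)

for n in (10, 100, 1000, 10000):
    edges = np.linspace(0.0, 1.0, n + 1)
    tags = rng.uniform(edges[:-1], edges[1:])   # one arbitrary tag per subinterval
    print(n, np.sum(f(tags)) / n)               # each subinterval has length 1/n
</code></pre>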
[ { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore, by the criterion for Riemann integrability, $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand, the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in the preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However, it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of the Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of a specific form where all sub-intervals are of equal length. In particular we don't assume anything about the norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on the choice of tags, but the speed of convergence may depend on the choice of tags. This will make $N$ in the proof dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the question is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Riemann integral. 
If you consider functions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>
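\n\n<p>A minimal sketch with the standard Dirichlet-type example: let $f(x) = 1$ for rational $x$ and $f(x) = 0$ for irrational $x$ on $[0, 1]$. Every subinterval of every partition contains both rational and irrational points, so tagging the equipartition with $x_{i} = i/n$ at its (rational) left endpoints gives $$\\sum_{i = 1}^{n}f(x_{i - 1})(x_{i} - x_{i - 1}) = 1$$ for every $n$, while tagging at irrational points gives $0$ for every $n$. The value of a Riemann sum is thus not determined by the partition alone, so no definition based only on partitions can assign a single value to this $f$; and indeed $U(f, P) - L(f, P) = 1$ for every partition $P$, so $f$ is not Riemann integrable.</p>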
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \\sup\\,\\{f(x), x \\in [x_{i - 1}, x_{i}]\\}$ it follows that there is a point $t_{i} \\in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \\frac{\\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \\sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \\frac{\\epsilon}{4} = S(f, P_{n}, t) + \\frac{\\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p>\n\n<p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \\frac{\\epsilon}{4}$$ We now have\n\\begin{align}\nU(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\\notag\\\\\n&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\, + S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;\\leq |U(f, P_{n}) - S(f, P_{n}, t)|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t) - I|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|I - S(f, P_{n}, t')|\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,+|S(f, P_{n}, t') - L(f, P_{n})|\\notag\\\\\n&amp;&lt; \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4} + \\frac{\\epsilon}{4}\\notag\\\\\n&amp;= \\epsilon\\notag\n\\end{align}\nTherefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p>\n\n<hr>\n\n<p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p>\n\n<hr>\n\n<p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\\max(N_{1},N_{2})$.</p>\n" }, { "answer_id": 2086478, "score": 2, "text": "<p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p>\n\n<p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>\n" }, { "answer_id": 2086998, "score": 8, "text": "<p>I expand my comment as an answer below.</p>\n\n<hr>\n\n<p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p>\n\n<p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p>\n\n<blockquote>\n <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\\epsilon &gt; 0$ there is a partition $P_{\\epsilon}$ of $[a, b]$ such that $$U(f, P_{\\epsilon}) - L(f, P_{\\epsilon}) &lt; \\epsilon$$ where for any partition $$P = \\{a = x_{0}, x_{1}, x_{2}, \\ldots, x_{n} = b\\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \\ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \\sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\\, L(f, P) = \\sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \\sup\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\},\\,m_{i} = \\inf\\,\\{f(x), x\\in [x_{i - 1}, x_{i}]\\}$$</em></p>\n</blockquote>\n\n<p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p>\n\n<p>Let's fix an $\\epsilon &gt; 0$. Let $P_{n} = \\{x_{0}, x_{1}, \\ldots, x_{n}\\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \\frac{\\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\\epsilon$ and not on the choice of tags with partition $P_{n}$.</p>\n\n<p>In what follows let us fix a positive integer $n &gt; N$. 
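[Editorial aside, not part of the original thread.] A minimal runnable sketch of the point both answers turn on, using invented helper names (`riemann_sum`, `pick`): for an integrable function, every tagging of the equipartitions drives the Riemann sums to the same limit, while for the Dirichlet function rational tags give sum 1 and irrational tags give sum 0 on every partition, so no tag-independent limit exists.

<pre><code># A minimal sketch (hypothetical helper, not from the thread): Riemann
# sums of f over the equipartition of [a, b] into n subintervals, where
# pick(left, right) chooses the tag inside each subinterval.
def riemann_sum(f, a, b, n, pick):
    h = (b - a) / n
    return sum(f(pick(a + i * h, a + (i + 1) * h)) * h for i in range(n))

f = lambda x: x * x  # Riemann integrable; the integral over [0, 1] is 1/3

for n in (10, 100, 1000):
    left  = riemann_sum(f, 0.0, 1.0, n, lambda l, r: l)
    right = riemann_sum(f, 0.0, 1.0, n, lambda l, r: r)
    mid   = riemann_sum(f, 0.0, 1.0, n, lambda l, r: (l + r) / 2)
    print(n, left, right, mid)  # all three taggings approach 1/3

# Contrast: for the Dirichlet function (1 on the rationals, 0 elsewhere)
# every subinterval contains both kinds of points, so a rational tagging
# gives sum 1 and an irrational tagging gives sum 0 for every n; no
# tagging-independent limit exists, which is why the definition must
# quantify over all partitions and all taggings.
</code></pre>

The Dirichlet contrast is stated in comments rather than computed, since floating-point numbers cannot represent irrational tags exactly.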
Question: Why was it necessary for the Riemann integral to consider all partitions and taggings? Answer 1: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have
\begin{align}
U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\
&amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\
&amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\
&amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\
&amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\
&amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\
&amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\
&amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\
&amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\
&amp;= \epsilon\notag
\end{align}
Therefore by the criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 2: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the question is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Riemann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 3: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 4: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
Answer 4: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the question is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Riemann integral. If you consider functions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p>
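<p>A standard example behind this last remark (spelled out here for concreteness) is the Dirichlet function $f$ on $[0, 1]$ with $f(x) = 1$ for rational $x$ and $f(x) = 0$ for irrational $x$. Every subinterval of every partition $P$ contains both rational and irrational points, so $$U(f, P) = 1, \quad L(f, P) = 0$$ for every $P$: each Riemann sum with rational tags equals $1$, while each Riemann sum with irrational tags equals $0$. The value of the sum is therefore not determined by the partition alone, which is exactly why a tag-free definition cannot reproduce the Riemann integral.</p>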
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 6: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 7: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 8: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 9: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 10: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 11: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 12: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 13: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 14: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 15: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 16: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 17: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 18: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 19: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 20: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 21: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 22: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 23: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Since $M_{i} = \sup\,\{f(x), x \in [x_{i - 1}, x_{i}]\}$ it follows that there is a point $t_{i} \in [x_{i - 1}, x_{i}]$ such that $$M_{i} - f(t_{i}) &lt; \frac{\epsilon}{4(b - a)}$$ and hence $$U(f,P_{n}) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}) &lt; \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) + \frac{\epsilon}{4} = S(f, P_{n}, t) + \frac{\epsilon}{4}$$ where $S(f, P, t)$ is a Riemann sum for $f$ over partition $P$ and tags $t_{i}$.</p> <p>In a similar manner we can show that there is another set of tags $t_{i}'$ for partition $P_{n}$ such that $$L(f, P_{n}) &gt; S(f, P_{n}, t') - \frac{\epsilon}{4}$$ We now have \begin{align} U(f, P_{n}) - L(f, P_{n}) &amp;= |U(f, P_{n}) - L(f, P_{n})|\notag\\ &amp;= |U(f, P_{n}) - S(f, P_{n}, t)\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t) - I + I - S(f, P_{n}, t')\notag\\ &amp;\,\,\,\,\,\,\,\, + S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;\leq |U(f, P_{n}) - S(f, P_{n}, t)|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t) - I|\notag\\ &amp;\,\,\,\,\,\,\,\,+|I - S(f, P_{n}, t')|\notag\\ &amp;\,\,\,\,\,\,\,\,+|S(f, P_{n}, t') - L(f, P_{n})|\notag\\ &amp;&lt; \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4}\notag\\ &amp;= \epsilon\notag \end{align} Therefore by criterion for Riemann integrability $f$ is Riemann integrable on $[a, b]$ according to the standard definition of Riemann integrability.</p> <hr> <p>On the other hand the second definition suggested by OP (which fixes the tags for a given partition) does not seem amenable to an analysis like the one given in preceding paragraphs. I have not been able to figure out a counter-example to show that this definition is not equivalent to the standard definition. However it does appear that the choice of tags is essential to link the Riemann sums with upper and lower Darboux sums and this is probably an essential ingredient in the definition of Riemann integral.</p> <hr> <p><strong>Update</strong>: In the proof presented above I nowhere use the fact that $P_{n} $ is a partition of specific form where all sub-intervals are of equal length. In particular we don't assume anything about norm of $P_{n} $. All I assume is that there is a sequence of such partitions for which the Riemann sums converge to $I$. What is assumed is that the number $I$ does not depend on choice of tags, but the speed of convergence may depend on choice of tags. This will make $N$ in the proof to be dependent on the choice of tags. But this is not a problem as we are dealing with only two choices of tags so that we can have $N_{1},N_{2}$ in place of $N$ and choose $N=\max(N_{1},N_{2})$.</p> Answer 24: <p>You can't define a tagging without saying what partition is being tagged, so the notion of choosing the tagging first is not viable.</p> <p>Now the wuestion is whether one could define an integral based on just the partitioning, without the tagging, and have it come out equivalent to the Reimann integral. 
If you consider fucntions which have different behavior on the rationals than on the irrationals, you will see that this will not work.</p> Answer 25: <p>I expand my comment as an answer below.</p> <hr> <p>First of all it is obvious that if a function $f$ is Riemann integrable according to the standard definition (the one which involves choice of partition as well as tags) then it is also Riemann integrable according to the two alternative definitions suggested by OP.</p> <p>Further we know that the theory of Riemann integration is conveniently studied using Darboux sums instead of Riemann sums and in this connection we have the following result (proof is available in almost any real-analysis textbook):</p> <blockquote> <p><strong>Criterion for Riemann Integrability</strong>: <em>A function $f$ defined and bounded on $[a, b]$ is Riemann integrable on $[a, b]$ if and only if for every $\epsilon &gt; 0$ there is a partition $P_{\epsilon}$ of $[a, b]$ such that $$U(f, P_{\epsilon}) - L(f, P_{\epsilon}) &lt; \epsilon$$ where for any partition $$P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}, x_{i - 1} &lt; x_{i}, i = 1, 2, \ldots, n$$ the upper Darboux sum for $f$ (denoted by $U(f, P)$) and lower Darboux sum (denoted by $L(f, P)$) is given by $$U(f, P) = \sum_{i = 1}^{n}M_{i}(x_{i} - x_{i - 1}),\, L(f, P) = \sum_{i = 1}^{n}m_{i}(x_{i} - x_{i - 1})$$ with $$M_{i} = \sup\,\{f(x), x\in [x_{i - 1}, x_{i}]\},\,m_{i} = \inf\,\{f(x), x\in [x_{i - 1}, x_{i}]\}$$</em></p> </blockquote> <p>Using this result it is possible to prove that if $f$ is Riemann integrable according to the first definition suggested by OP then it is Riemann integrable according to the standard accepted definition of Riemann integrability.</p> <p>Let's fix an $\epsilon &gt; 0$. Let $P_{n} = \{x_{0}, x_{1}, \ldots, x_{n}\}$ denote equipartition of $[a, b]$ so that $x_{i} = a + i(b - a)/n$. Let us assume that a function $f$ defined and bounded on $[a, b]$ is Riemann integrable according to the first definition given by OP. Then there exists a number $I$ and a positive integer $N$ such that $$|S(f, P_{n}) - I| &lt; \frac{\epsilon}{4}$$ for all $n &gt; N$. Here $S(f, P_{n})$ represents a Riemann sum for $f$ over equipartition $P_{n}$ with any arbitrary set of tags (the notation $S(f, P)$ does not explicitly indicate the dependence on tags and it will be augmented later to show this dependence). The important thing to note here is that $N$ depends only on $\epsilon$ and not on the choice of tags with partition $P_{n}$.</p> <p>In what follows let us fix a positive integer $n &gt; N$. 
Can this enumeration problem be generalized? (counting $20$-subsets of $\{1,2,3,\dots 30\}$ with no three consecutive elements)
2,089,768
10
https://math.stackexchange.com/questions/2089768/can-this-enumeration-problem-be-generalized-counting-20-subsets-of-1-2-3
<p>I found a problem in a Brazilian contest that goes as follows:</p> <p>In how many ways can we select $20$ elements among $\{1,2,3,\dots, 30\}$ such that no three consecutive elements are chosen?</p> <p>My solution:</p> <blockquote class="spoiler"> <p> Split the elements into groups of $3$: $\{1,2,3\},\{4,5,6\},\dots,\{28,29,30\}$ and notice that we must pick exactly two from each group (a group contributing all three elements would give three consecutive elements, and a group contributing fewer than two would force some other group to contribute three). There are three options for each group, and we call the group a "left", "center" or "right" group depending on the "missing" element. Notice that a "left" group cannot appear to the left of a "center" or "right" group, and a "right" group cannot appear to the right of a "center" or "left" group. Hence, selecting the quantity of each kind of group uniquely determines the arrangement, so the answer is the number of solutions to $l+c+r=10$ (one variable per group type, summing to the $10$ groups) in non-negative integers, which is $\binom{12}{2}=66$ by stars and bars.</p> </blockquote> <p>This solution clearly uses that $\frac{20}{30}=\frac{2}{3}$. Is it possible to solve it for $k$-subsets of $[n]$ such that no three elements are consecutive? (I am especially interested in whether the answer can be computed efficiently; it's probably possible to do this with a $2$-dimensional recurrence, but that doesn't seem very efficient.)</p>
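<p>As a quick computational check of the grouping argument in the spoiler (a brute-force sketch; the type labels below are just the "missing element" naming from above, not anything from the original solution), one can enumerate the $3^{10}$ choices of omitted element per group and count the assignments avoiding three consecutive elements; this indeed returns $66=\binom{12}{2}$.</p> <pre>
from itertools import product

# Index of the omitted element within the block {3g+1, 3g+2, 3g+3},
# following the "left"/"center"/"right" naming used in the spoiler.
OMIT = {"left": 0, "center": 1, "right": 2}

def subset_from_types(types):
    # Build the 20-element subset determined by one type per group.
    chosen = []
    for g, t in enumerate(types):
        block = [3 * g + 1, 3 * g + 2, 3 * g + 3]
        del block[OMIT[t]]
        chosen.extend(block)
    return chosen

def no_three_consecutive(chosen):
    s = set(chosen)
    return not any(x + 1 in s and x + 2 in s for x in chosen)

count = sum(no_three_consecutive(subset_from_types(t))
            for t in product(OMIT, repeat=10))
print(count)  # 66
</pre>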
[ { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier $1/(1-z)$ sums the contributions as these subsets\nare classified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$ we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ 
\\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }
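<p>As a sanity check of the closed form $\sum_{q=0}^{k} \binom{q}{k-q}\binom{n+1-k}{q}$ from the generating-function answer above, here is a small Python sketch (an independent illustration, not a port of the Maple code) that compares it with direct enumeration for small $n$ and then evaluates the contest case $n=30$, $k=20$.</p> <pre>
from itertools import combinations
from math import comb

def closed_form(n, k):
    # sum_{q=0}^{k} C(q, k-q) * C(n+1-k, q); math.comb returns 0
    # whenever the lower index exceeds the upper one.
    return sum(comb(q, k - q) * comb(n + 1 - k, q) for q in range(k + 1))

def brute_force(n, k):
    # Direct enumeration of k-subsets of {1, ..., n}; small n only.
    def ok(c):
        s = set(c)
        return not any(x + 1 in s and x + 2 in s for x in c)
    return sum(ok(c) for c in combinations(range(1, n + 1), k))

assert all(closed_form(n, k) == brute_force(n, k)
           for n in range(1, 13) for k in range(n + 1))
print(closed_form(30, 20))  # 66
</pre>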
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }
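<p>As a sanity check, the three answers above can be played off against one another in Maple, in the spirit of the routines already given. This is only a sketch: the names <code>X3</code> and <code>F</code> are illustrative additions continuing the <code>X1</code>/<code>X2</code> naming, <code>multinomial</code> is the routine from the <code>combinat</code> package, and the range of <code>p</code> is the one for which all three lower indices of the trinomial are nonnegative.</p>

<pre>
with(combinat):

# closed form from the generating-function answer
X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k):

# trinomial sum from the tiling answer
X3 := (n, k) -&gt; add(multinomial(n-k+1, p, k-2*p, n-2*k+p+1),
                    p = max(0, 2*k-n-1)..floor(k/2)):

# formula from the last answer, valid for the case n = 3m, k = 2m
F := m -&gt; (m+1)*(m+2)/2:

X2(30, 20), X3(30, 20), F(10);   # all three return 66
</pre>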
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>: it cannot contain a whole triple <span class=\"math-container\">$A_k$</span>, and it omits only <span class=\"math-container\">$m$</span> elements of <span class=\"math-container\">$U$</span> in total. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>.)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>: if <span class=\"math-container\">$3k-2\\notin S$</span>, then <span class=\"math-container\">$3k-1,3k\\in S$</span>, so <span class=\"math-container\">$3(k+1)-2=3k+1\\in S$</span> would create the three consecutive elements <span class=\"math-container\">$3k-1,3k,3k+1$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can easily be seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>
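<p>A brute-force check in Python for small <span class=\"math-container\">$m$</span> (a sketch of mine, not from the original answer) agrees with this formula:</p>
<pre>
from itertools import combinations

def eligible_count(m):
    # 2m-subsets of {1,...,3m} with no three consecutive elements
    n, k = 3 * m, 2 * m
    total = 0
    for s in combinations(range(1, n + 1), k):
        if not any(s[i + 1] == s[i] + 1 and s[i + 2] == s[i] + 2
                   for i in range(k - 2)):
            total += 1
    return total

for m in range(1, 6):
    assert eligible_count(m) == (m + 1) * (m + 2) // 2
print((10 + 1) * (10 + 2) // 2)  # F(10) = 66
</pre>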
\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier $1/(1-z)$ sums the contributions as these subsets\nare classified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.</p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$ we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\\sum_{k=1}^{m+1}\\,N_k=\\sum_{k=1}^{m+1}\\,k=\\frac{(m+1)(m+2)}{2}\\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\\frac{11\\cdot 12}{2}=66\\,.$$</span></p>\n" }, { "answer_id": 2091272, "score": 2, "text": "<p>Suppose $n$ is the number of elements and $k$ the size of the subset.\nThe idea here is to construct a generating function by choosing the\nfirst element of each subset and thereafter appending a series of gaps\nbetween adjacent values, marking runs of consecutive elements. We use\n$w$ for these runs and $u$ to mark runs of at least three consecutive\nelements and $z$ for ordinary gaps of size at least two. We also use\n$v$ to count elements. We then have from first principles the OGF</p>\n\n<p>$$f(z,w,u,v) = \\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\\cdots) + 1.$$</p>\n\n<p>As a sanity check we should get all subsets from</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,1,1)$$</p>\n\n<p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets\nare classifified according to the last element). We obtain</p>\n\n<p>$$\\frac{z}{1-z} \\sum_{q\\ge 0} \\frac{z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} (w+w^2+w^3+w^4+\\cdots)^p\n\\left(\\sum_{q\\ge 1} \\frac{z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+w+w^2+w^3+w^4+\\cdots) + 1.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{z}{1-z} \\frac{1}{1-z^2/(1-z)}\n\\left(\\sum_{p\\ge 0} \\frac{w^p}{(1-w)^p}\n\\left(\\frac{z^2/(1-z)}{1-z^2/(1-z)}\\right)^p\\right)\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))}\n\\frac{1}{1-w} + 1\n\\\\ = \\frac{z}{1-z-z^2}\n\\frac{1}{1-wz^2/(1-w)/(1-z-z^2)}\n\\frac{1}{1-w} + 1\n\\\\ = z\n\\frac{1}{1-z-z^2-wz^2/(1-w)}\n\\frac{1}{1-w} + 1\n= z \\frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p>\n\n<p>To conclude we put $w=z$ and get</p>\n\n<p>$$z \\frac{1}{(1-z-z^2)(1-z)-z^3} + 1\n= z \\frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1\n\\\\ = z \\frac{1}{1-2z} + 1 = \\frac{1-z}{1-2z}.$$</p>\n\n<p>We see that on multiplying by $1/(1-z)$ we obtain</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\\frac{1}{1-2z}.}$$</p>\n\n<p>There are $2^n$ subsets and the sanity check goes through.\n<P></p>\n\n<p>Now to eliminate subsets containing three consecutive elements we\ncompute</p>\n\n<p>$$\\frac{1}{1-z} f(z,z,0,v)$$</p>\n\n<p>which yields</p>\n\n<p>$$\\frac{vz}{1-z}\\sum_{q\\ge 0} \\frac{v^q z^{2q}}{(1-z)^q}\n\\\\ \\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\sum_{q\\ge 1} \\frac{v^q z^{2q}}{(1-z)^q}\\right)^p\\right)\n\\\\ \\times (1+vw) + 1$$</p>\n\n<p>which is</p>\n\n<p>$$\\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\\sum_{p\\ge 0} v^p w^p\n\\left(\\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\\right)^p\\right)\n\\times (1+vw) + 1\n\\\\ = \\frac{vz}{1-z} \\frac{1}{1-vz^2/(1-z)}\n\\times \\left(\n\\frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\\right)\n\\\\ \\times (1+vw) + 1\n\\\\ = \\frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1\n\\\\ = \\frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p>\n\n<p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \\frac{1}{1-z}.$$</p>\n\n<p>The second term only contributes for $k=0$ while the first does not\ncontribute when $k=0.$ 
This means there is one subset of zero elements\nnot containing three consecutive elements, namely the empty set, and\nthis is obviously true.</p>\n\n<p><P>Simplifying we find</p>\n\n<p>$$\\frac{1}{1-z}\\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n+ \\frac{1}{1-z}\n\\\\ = \\left(1-\\frac{1}{z}\\right)\\frac{1}{1-z}\n+ \\frac{1}{z} \\frac{1}{1-z-vz^2 - v^2z^3} \n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \n\\frac{1}{1-z-vz^2 - v^2z^3}\n\\\\ = -\\frac{1}{z} + \\frac{1}{z} \\frac{1}{1-z}\n\\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p>\n\n<p>Observe that</p>\n\n<p>$$[v^k] \\frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}\n= \\sum_{q=0}^k [v^k] \\frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q\n\\\\ = \\sum_{q=0}^k \\frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q\n= \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{2q}}{(1-z)^q} z^{k-q}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}.$$</p>\n\n<p>The coefficient extraction in $z$ now proceeds as follows:</p>\n\n<p>$$[z^n] \\frac{1}{z}\\frac{1}{1-z} \n\\sum_{q=0}^k {q\\choose k-q} \\frac{z^{k+q}}{(1-z)^q}\n= [z^{n+1}] \\sum_{q=0}^k {q\\choose k-q} \n\\frac{z^{k+q}}{(1-z)^{q+1}}\n\\\\ = \\sum_{q=0}^k {q\\choose k-q} \n[z^{n+1-k-q}] \\frac{1}{(1-z)^{q+1}}.$$</p>\n\n<p>We have the closed form which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{q=0}^k {q\\choose k-q} {n+1-k\\choose q}.}$$</p>\n\n<p>Listing these values in a triangular format we find</p>\n\n<p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \n\\\\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \n\\\\ 8, 28, 50, 45, 16, 1, 0, 0, \\ldots$$</p>\n\n<p>which points us to <a href=\"https://oeis.org/A078802\" rel=\"nofollow noreferrer\">OEIS A078802</a> where the\nabove analysis is confirmed.</p>\n\n<p><P>The above computation was supported by the following Maple code.</p>\n\n<pre>\nwith(combinat);\n\nENUM :=\nproc(n, k)\n option remember;\n local list, pos, res;\n\n if k &lt; 3 then return binomial(n,k) fi;\n\n res := 0;\n\n for list in choose(n, k) do\n for pos to k-2 do\n if list[pos] + 1 = list[pos + 1] and\n list[pos + 1] + 1 = list[pos+2] then\n break;\n fi;\n od;\n\n if pos = k-1 then\n res := res + 1;\n fi;\n od;\n\n res;\nend;\n\nX1 := (n, k) -&gt;\ncoeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3),\n v=0, k), z=0, n);\n\nX2 := (n, k) -&gt;\nadd(binomial(q,k-q)*binomial(n+1-k,q), q=0..k);\n</pre>\n\n<p>In particular for subsets of twenty elements of a set containing the\ninteger range from one to thirty the number of subsets without three\nconsecutive elements is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{66.}$$</p>\n" }, { "answer_id": 2089931, "score": 1, "text": "<p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p>\n\n<p>Any pattern of that description can be constructed out of tiles looking like this:</p>\n\n<p>$$\\blacksquare\\blacksquare\\square\\qquad\\blacksquare\\square\\qquad\\square$$</p>\n\n<p>(The tiles cannot be rotated or flipped. The last square is always $\\square$, so it can be ignored.)</p>\n\n<p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. 
Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p>\n\n<p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\\binom{a+b+c}{a,\\ b,\\ c}=\\frac{(a+b+c)!}{a!\\ b!\\ c!}$.</p>\n\n<p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p>\n\n<p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p>\n\n<p>Thus the number of arrangements for specific $p$ is the trinomial:</p>\n\n<p>$$\\binom{n-k+1}{p,\\ k-2p,\\ n-2k+p+1}$$</p>\n\n<p>And we have to sum that over all relevant $p$.</p>\n" }, { "answer_id": 2945543, "score": 0, "text": "<p>Let <span class=\"math-container\">$m$</span> be a fixed positive integer. We say that a subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U:=\\{1,2,\\ldots,3m\\}$</span> is <em>eligible</em> if <span class=\"math-container\">$|S|=2m$</span> and <span class=\"math-container\">$S$</span> does not contain <span class=\"math-container\">$3$</span> consecutive elements of <span class=\"math-container\">$U$</span>. Let <span class=\"math-container\">$A_k:=\\{3k-2,3k-1,3k\\}$</span> for <span class=\"math-container\">$k=1,2,\\ldots,m$</span>. We note that each eligible subset <span class=\"math-container\">$S$</span> of <span class=\"math-container\">$U$</span> must omit exactly one element from each <span class=\"math-container\">$A_k$</span>. We say that <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span> is <em>bad</em> with respect to <span class=\"math-container\">$S$</span> if <span class=\"math-container\">$3k-2\\notin S$</span>. (Obviously, <span class=\"math-container\">$m+1$</span> is always bad for any eligible subset <span class=\"math-container\">$S$</span>)</p>\n\n<p>Observe that, if <span class=\"math-container\">$k$</span> is bad with respect to <span class=\"math-container\">$S$</span>, then any <span class=\"math-container\">$j\\in\\{k,k+1,k+2,\\ldots,m+1\\}$</span> is also bad with respect to <span class=\"math-container\">$S$</span>. For <span class=\"math-container\">$k\\in\\{1,2,\\ldots,m+1\\}$</span>, write <span class=\"math-container\">$N_k$</span> for the number of eligible subsets <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$k$</span> is the smallest bad integer with respect to <span class=\"math-container\">$S$</span>. It can be easily seen that\n<span class=\"math-container\">$$N_k=k$$</span>\nbecause, for some <span class=\"math-container\">$t\\in\\{0,1,2,\\ldots,k-1\\}$</span>, <span class=\"math-container\">$S$</span> must omit integers of the form <span class=\"math-container\">$3r$</span> for <span class=\"math-container\">$r\\in\\{1,2,\\ldots,t\\}$</span> and omit integers of the form <span class=\"math-container\">$3r-1$</span> for <span class=\"math-container\">$r\\in \\{t+1,t+2,\\ldots,k-1\\}$</span>. 
Thus, the total number of eligible subsets <span class=\"math-container\">$S$</span> is\n<span class=\"math-container\">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span>\nIn particular,\n<span class=\"math-container\">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p>\n" } ]
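The three answers recorded above reach the same count by different routes. As a quick cross-check of their formulas, here is a minimal Python sketch; the function names and the use of `math.comb`, `math.factorial`, and `itertools.combinations` are illustrative choices of mine, not anything from the original answers:

<pre>
from itertools import combinations
from math import comb, factorial

def count_gf_closed_form(n, k):
    # Closed form from the generating-function answer:
    # sum over q of C(q, k-q) * C(n+1-k, q).
    return sum(comb(q, k - q) * comb(n + 1 - k, q) for q in range(k + 1))

def count_trinomial(n, k):
    # Trinomial sum from the tiling answer, taken over all p with
    # nonnegative parts p, k-2p and n-2k+p+1 (they sum to n-k+1).
    total = 0
    for p in range(k // 2 + 1):
        if n - 2 * k + p + 1 >= 0:
            total += (factorial(n - k + 1)
                      // (factorial(p) * factorial(k - 2 * p)
                          * factorial(n - 2 * k + p + 1)))
    return total

def count_brute_force(n, k):
    # Direct enumeration of the k-subsets of {1,...,n} with no three
    # consecutive elements, mirroring the Maple ENUM procedure.
    return sum(1 for s in combinations(range(1, n + 1), k)
               if not any(s[i + 1] == s[i] + 1 and s[i + 2] == s[i] + 2
                          for i in range(len(s) - 2)))

# The three counts agree on small cases ...
for n in range(9):
    for k in range(n + 1):
        assert (count_gf_closed_form(n, k) == count_trinomial(n, k)
                == count_brute_force(n, k))

# ... and reproduce the headline value, matching F(10) = 11*12/2 = 66
# from the third answer.
print(count_gf_closed_form(30, 20))  # 66
</pre>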
Question: Can this enumeration problem be generalized? (counting $20$-subsets of $\{1,2,3,\dots 30\}$ with no three consecutive elements) Answer 1: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplication by $1/(1-z)$ sums the contributions, as these subsets are classified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
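A symbolic spot-check of this collapse, as a small sketch in Python/SymPy (my choice of tool here; the answer itself verifies it by hand and with Maple below):

<pre>
import sympy as sp

z, w = sp.symbols('z w')

# The OGF obtained above, before specializing w and summing over
# the last element.
f = z / ((1 - z - z**2) * (1 - w) - w * z**2) + 1

# Put w = z and multiply by 1/(1-z); this should collapse to 1/(1-2z),
# whose coefficients are 2^n.
g = sp.cancel(f.subs(w, z) / (1 - z))
assert sp.simplify(g - 1 / (1 - 2 * z)) == 0
print(g)  # -1/(2*z - 1), i.e. 1/(1 - 2*z)
</pre>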
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$ we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We thus obtain the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a>, where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular, for subsets of twenty elements of a set containing the integer range from one to thirty, the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 2: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 3: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Answer 3: <p>Let $m$ be a fixed positive integer. We say that a subset $S$ of $U:=\{1,2,\ldots,3m\}$ is <em>eligible</em> if $|S|=2m$ and $S$ does not contain $3$ consecutive elements of $U$. Let $A_k:=\{3k-2,3k-1,3k\}$ for $k=1,2,\ldots,m$. Since $S$ omits exactly $m$ elements of $U$ and must omit at least one element from each of the $m$ blocks $A_k$, each eligible subset $S$ must omit exactly one element from each $A_k$. We say that $k\in\{1,2,\ldots,m+1\}$ is <em>bad</em> with respect to $S$ if $3k-2\notin S$. (Obviously, $m+1$ is always bad for any eligible subset $S$, since $3(m+1)-2=3m+1\notin U$.)</p>

<p>Observe that, if $k$ is bad with respect to $S$, then any $j\in\{k,k+1,k+2,\ldots,m+1\}$ is also bad with respect to $S$: if $k\le m$ is bad, then $3k-1$ and $3k$ both lie in $S$, so $3k+1=3(k+1)-2$ must be omitted, and the claim follows by induction. For $k\in\{1,2,\ldots,m+1\}$, write $N_k$ for the number of eligible subsets $S$ such that $k$ is the smallest bad integer with respect to $S$. It can be easily seen that $$N_k=k$$ because, for some $t\in\{0,1,2,\ldots,k-1\}$, $S$ must omit integers of the form $3r$ for $r\in\{1,2,\ldots,t\}$ and omit integers of the form $3r-1$ for $r\in \{t+1,t+2,\ldots,k-1\}$. Thus, the total number of eligible subsets $S$ is $$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$ In particular, $$F(10)=\frac{11\cdot 12}{2}=66\,.$$</p>
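<p>As a hypothetical cross-check against the general closed form from the generating-function answer above (with $n=3m$ and $k=2m$, so $n+1-k=m+1$):</p>

<pre>
# should print all zeros for m = 1..10
F := m -&gt; (m+1)*(m+2)/2;
seq(F(m) - add(binomial(q, 2*m-q)*binomial(m+1, q), q = 0..2*m), m = 1..10);
</pre>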
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 5: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 6: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 7: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 8: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 9: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 10: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 11: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 12: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 13: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 14: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 15: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 16: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 17: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 18: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 19: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 20: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 21: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 22: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 23: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
Answer 23: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 25: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 26: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 27: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 28: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 29: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 30: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 31: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 32: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 33: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 34: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 35: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 36: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 37: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 38: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 39: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p> Answer 40: <p>Suppose $n$ is the number of elements and $k$ the size of the subset. The idea here is to construct a generating function by choosing the first element of each subset and thereafter appending a series of gaps between adjacent values, marking runs of consecutive elements. We use $w$ for these runs and $u$ to mark runs of at least three consecutive elements and $z$ for ordinary gaps of size at least two. We also use $v$ to count elements. We then have from first principles the OGF</p> <p>$$f(z,w,u,v) = \frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw+uv^2w^2+uv^3w^3+uv^4w^4+\cdots) + 1.$$</p> <p>As a sanity check we should get all subsets from</p> <p>$$\frac{1}{1-z} f(z,z,1,1)$$</p> <p>(the multiplier by $1/(1-z)$ sums the contributions as these subsets are classifified according to the last element). We obtain</p> <p>$$\frac{z}{1-z} \sum_{q\ge 0} \frac{z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} (w+w^2+w^3+w^4+\cdots)^p \left(\sum_{q\ge 1} \frac{z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+w+w^2+w^3+w^4+\cdots) + 1.$$</p> <p>This simplifies to</p> <p>$$\frac{z}{1-z} \frac{1}{1-z^2/(1-z)} \left(\sum_{p\ge 0} \frac{w^p}{(1-w)^p} \left(\frac{z^2/(1-z)}{1-z^2/(1-z)}\right)^p\right) \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-z)/(1-w)/(1-z^2/(1-z))} \frac{1}{1-w} + 1 \\ = \frac{z}{1-z-z^2} \frac{1}{1-wz^2/(1-w)/(1-z-z^2)} \frac{1}{1-w} + 1 \\ = z \frac{1}{1-z-z^2-wz^2/(1-w)} \frac{1}{1-w} + 1 = z \frac{1}{(1-z-z^2)(1-w)-wz^2} + 1.$$</p> <p>To conclude we put $w=z$ and get</p> <p>$$z \frac{1}{(1-z-z^2)(1-z)-z^3} + 1 = z \frac{1}{1-z-z^2-z+z^2+z^3-z^3} + 1 \\ = z \frac{1}{1-2z} + 1 = \frac{1-z}{1-2z}.$$</p> <p>We see that on multiplying by $1/(1-z)$ we obtain</p> <p>$$\bbox[5px,border:2px solid #00A000]{\frac{1}{1-2z}.}$$</p> <p>There are $2^n$ subsets and the sanity check goes through. 
<P></p> <p>Now to eliminate subsets containing three consecutive elements we compute</p> <p>$$\frac{1}{1-z} f(z,z,0,v)$$</p> <p>which yields</p> <p>$$\frac{vz}{1-z}\sum_{q\ge 0} \frac{v^q z^{2q}}{(1-z)^q} \\ \times \left(\sum_{p\ge 0} v^p w^p \left(\sum_{q\ge 1} \frac{v^q z^{2q}}{(1-z)^q}\right)^p\right) \\ \times (1+vw) + 1$$</p> <p>which is</p> <p>$$\frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left(\sum_{p\ge 0} v^p w^p \left(\frac{vz^2/(1-z)}{1-vz^2/(1-z)}\right)^p\right) \times (1+vw) + 1 \\ = \frac{vz}{1-z} \frac{1}{1-vz^2/(1-z)} \times \left( \frac{1}{1-v^2wz^2/(1-z)/(1-vz^2/(1-z))}\right) \\ \times (1+vw) + 1 \\ = \frac{vz(1+vw)}{(1-z)(1-vz^2/(1-z)) - v^2wz^2} + 1 \\ = \frac{vz(1+vw)}{1-z-vz^2 - v^2wz^2} + 1.$$</p> <p>On replacing $w$ by $z$ and multiplying by $1/(1-z)$we thus have</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z}.$$</p> <p>The second term only contributes for $k=0$ while the first does not contribute when $k=0.$ This means there is one subset of zero elements not containing three consecutive elements, namely the empty set, and this is obviously true.</p> <p><P>Simplifying we find</p> <p>$$\frac{1}{1-z}\frac{vz+v^2z^2-(1-z)/z}{1-z-vz^2 - v^2z^3} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} + \frac{1}{1-z} \\ = \left(1-\frac{1}{z}\right)\frac{1}{1-z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z-vz^2 - v^2z^3} \\ = -\frac{1}{z} + \frac{1}{z} \frac{1}{1-z} \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)}.$$</p> <p>Observe that</p> <p>$$[v^k] \frac{1}{1-vz^2/(1-z) - v^2z^3/(1-z)} = \sum_{q=0}^k [v^k] \frac{v^q z^{2q}}{(1-z)^q} (1+vz)^q \\ = \sum_{q=0}^k \frac{z^{2q}}{(1-z)^q} [v^{k-q}] (1+vz)^q = \sum_{q=0}^k {q\choose k-q} \frac{z^{2q}}{(1-z)^q} z^{k-q} \\ = \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q}.$$</p> <p>The coefficient extraction in $z$ now proceeds as follows:</p> <p>$$[z^n] \frac{1}{z}\frac{1}{1-z} \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^q} = [z^{n+1}] \sum_{q=0}^k {q\choose k-q} \frac{z^{k+q}}{(1-z)^{q+1}} \\ = \sum_{q=0}^k {q\choose k-q} [z^{n+1-k-q}] \frac{1}{(1-z)^{q+1}}.$$</p> <p>We have the closed form which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{q=0}^k {q\choose k-q} {n+1-k\choose q}.}$$</p> <p>Listing these values in a triangular format we find</p> <p>$$1, 1, 1, 1, 2, 1, 1, 3, 3, 0, 1, 4, 6, 2, 0, 1, 5, 10, 7, 1, \\ 0, 1, 6, 15, 16, 6, 0, 0, 1, 7, 21, 30, 19, 3, 0, 0, 1, \\ 8, 28, 50, 45, 16, 1, 0, 0, \ldots$$</p> <p>which points us to <a href="https://oeis.org/A078802" rel="nofollow noreferrer">OEIS A078802</a> where the above analysis is confirmed.</p> <p><P>The above computation was supported by the following Maple code.</p> <pre> with(combinat); ENUM := proc(n, k) option remember; local list, pos, res; if k &lt; 3 then return binomial(n,k) fi; res := 0; for list in choose(n, k) do for pos to k-2 do if list[pos] + 1 = list[pos + 1] and list[pos + 1] + 1 = list[pos+2] then break; fi; od; if pos = k-1 then res := res + 1; fi; od; res; end; X1 := (n, k) -&gt; coeftayl(coeftayl(-1/z+1/z*1/(1-z-v*z^2-v^2*z^3), v=0, k), z=0, n); X2 := (n, k) -&gt; add(binomial(q,k-q)*binomial(n+1-k,q), q=0..k); </pre> <p>In particular for subsets of twenty elements of a set containing the integer range from one to thirty the number of subsets without three consecutive elements is</p> <p>$$\bbox[5px,border:2px solid #00A000]{66.}$$</p> Answer 41: <p>Suppose that we have a row of $n$ squares, and we have to shade in $k$ of them, without shading three in a row.</p> <p>Any pattern of that 
description can be constructed out of tiles looking like this:</p> <p>$$\blacksquare\blacksquare\square\qquad\blacksquare\square\qquad\square$$</p> <p>(The tiles cannot be rotated or flipped. The last square is always $\square$, so it can be ignored.)</p> <p>Assume that we have $p$ of the first kind of tile, and $k-2p$ of the second kind, and any number of the third kind. Then the problem is equivalent to lining up those tiles in a row of length $n+1$.</p> <p>As there are three kinds of tiles, we know by elementary counting that the number of arrangements will be of the form $\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\ b!\ c!}$.</p> <p>The total number of tiles used must be equal to the number of squares, minus the number of tiles of the second kind, minus twice the number of tiles of the first kind:</p> <p>$$(n+1)-(k-2p)-2(p)=n-k+1$$</p> <p>Thus the number of arrangements for specific $p$ is the trinomial:</p> <p>$$\binom{n-k+1}{p,\ k-2p,\ n-2k+p+1}$$</p> <p>And we have to sum that over all relevant $p$.</p> Answer 42: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can be easily seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. 
Answer 42: <p>Let <span class="math-container">$m$</span> be a fixed positive integer. We say that a subset <span class="math-container">$S$</span> of <span class="math-container">$U:=\{1,2,\ldots,3m\}$</span> is <em>eligible</em> if <span class="math-container">$|S|=2m$</span> and <span class="math-container">$S$</span> does not contain <span class="math-container">$3$</span> consecutive elements of <span class="math-container">$U$</span>. Let <span class="math-container">$A_k:=\{3k-2,3k-1,3k\}$</span> for <span class="math-container">$k=1,2,\ldots,m$</span>. We note that each eligible subset <span class="math-container">$S$</span> of <span class="math-container">$U$</span> must omit exactly one element from each <span class="math-container">$A_k$</span>. We say that <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span> is <em>bad</em> with respect to <span class="math-container">$S$</span> if <span class="math-container">$3k-2\notin S$</span>. (Obviously, <span class="math-container">$m+1$</span> is always bad for any eligible subset <span class="math-container">$S$</span>.)</p> <p>Observe that, if <span class="math-container">$k$</span> is bad with respect to <span class="math-container">$S$</span>, then any <span class="math-container">$j\in\{k,k+1,k+2,\ldots,m+1\}$</span> is also bad with respect to <span class="math-container">$S$</span>. For <span class="math-container">$k\in\{1,2,\ldots,m+1\}$</span>, write <span class="math-container">$N_k$</span> for the number of eligible subsets <span class="math-container">$S$</span> such that <span class="math-container">$k$</span> is the smallest bad integer with respect to <span class="math-container">$S$</span>. It can easily be seen that <span class="math-container">$$N_k=k$$</span> because, for some <span class="math-container">$t\in\{0,1,2,\ldots,k-1\}$</span>, <span class="math-container">$S$</span> must omit the integers of the form <span class="math-container">$3r$</span> for <span class="math-container">$r\in\{1,2,\ldots,t\}$</span> and omit the integers of the form <span class="math-container">$3r-1$</span> for <span class="math-container">$r\in \{t+1,t+2,\ldots,k-1\}$</span>. Thus, the total number of eligible subsets <span class="math-container">$S$</span> is <span class="math-container">$$F(m):=\sum_{k=1}^{m+1}\,N_k=\sum_{k=1}^{m+1}\,k=\frac{(m+1)(m+2)}{2}\,.$$</span> In particular, <span class="math-container">$$F(10)=\frac{11\cdot 12}{2}=66\,.$$</span></p>
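<p>A brief Python sketch (again purely illustrative; the function name is ad hoc) confirming <span class="math-container">$F(m)=(m+1)(m+2)/2$</span> by brute force for small <span class="math-container">$m$</span>:</p> <pre>
# Brute-force check of F(m) for the special case n = 3m, k = 2m.
from itertools import combinations

def eligible_count(m):
    # count 2m-subsets of {1..3m} with no three consecutive elements
    n, k = 3 * m, 2 * m
    total = 0
    for s in combinations(range(1, n + 1), k):
        if not any(s[i] + 1 == s[i + 1] and s[i + 1] + 1 == s[i + 2]
                   for i in range(k - 2)):
            total += 1
    return total

for m in range(1, 6):
    assert eligible_count(m) == (m + 1) * (m + 2) // 2
print("F(m) = (m+1)(m+2)/2 verified for m = 1..5")
</pre>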
Is it a convention that the word "where" following a mathematical formula needs a comma before it?
2,091,528
10
https://math.stackexchange.com/questions/2091528/is-it-a-convention-that-the-word-where-following-a-mathematical-formula-needs
<p>I am not a native English speaker. I learnt about defining and non-defining relative clauses from English grammar books. Grammar books tell me not to use commas in defining relative clauses, so I don't understand why there is a comma preceding &quot;where&quot; in these examples:</p> <p>From <strong>Theorem 4.59</strong> (Sylow Theorems) in <em>Anthony W. Knapp's Basic Algebra, Digital Second Edition</em>:</p> <blockquote> <p>Let <span class="math-container">$G$</span> be a finite group of order <span class="math-container">$p^mr$</span>, where <span class="math-container">$p$</span> is prime and <span class="math-container">$p$</span> does not divide <span class="math-container">$r.$</span></p> </blockquote> <p>From <em>Hungerford's Algebra</em>:</p> <blockquote> <p><strong>Theorem 6.7</strong> (Fundamental Theorem of Arithmetic) Any positive integer <span class="math-container">$n \gt 1$</span> may be written uniquely in the form <span class="math-container">$n = p_1^{t_1}p_2^{t_2} \cdots p_k^{t_k}$</span>, where <span class="math-container">$p_1 \lt p_2 \lt \cdots \lt p_k$</span> are primes and <span class="math-container">$t_i \gt 0$</span> for all <span class="math-container">$i.$</span></p> </blockquote> <p>I think the clause &quot;where <span class="math-container">$p_1 \lt p_2 \lt \cdots \lt p_k$</span> are primes and <span class="math-container">$t_i \gt 0$</span> for all <span class="math-container">$i$</span>&quot; is a defining relative clause, since it gives essential information about the form <span class="math-container">$n = p_1^{t_1}p_2^{t_2} \cdots p_k^{t_k}.$</span></p> <p>I looked up the word &quot;where&quot; in three mathematical textbooks, and in similar situations, they all use commas between formulae and the word &quot;where&quot;. Is this a convention specific to mathematical writing?</p>
[ { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. 
However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) 
is unobjectionable.</p>\n" }, { "answer_id": 2667652, "score": 3, "text": "<p>Consider an example:</p>\n<p>(1)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span>, where <span class=\"math-container\">$g(y)\\in B$</span> &quot;</p>\n<p>(2)<span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> where <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>The first statement could be rewritten, with a change of emphasis, as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$(x,y)\\in A\\times \\{y:g(y)\\in B\\}$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> &quot;.</p>\n<p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p>\n<p><span class=\"math-container\">$\\qquad$</span>&quot;We can find <span class=\"math-container\">$x\\in A$</span> such that <span class=\"math-container\">$f(x,y)=0$</span> whenever <span class=\"math-container\">$g(y)\\in B$</span> &quot;.</p>\n<p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p>\n<p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class=\"math-container\">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class=\"math-container\">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>\n" } ]
Question: Is it a convention that the word &quot;where&quot; following a mathematical formula needs a comma before it? Answer 1: <p>Consider an example:</p> <p>(1)<span class="math-container">$\qquad$</span>&quot;We can find <span class="math-container">$x\in A$</span> such that <span class="math-container">$f(x,y)=0$</span>, where <span class="math-container">$g(y)\in B$</span> &quot;</p> <p>(2)<span class="math-container">$\qquad$</span>&quot;We can find <span class="math-container">$x\in A$</span> such that <span class="math-container">$f(x,y)=0$</span> where <span class="math-container">$g(y)\in B$</span> &quot;.</p> <p>The first statement could be rewritten, with a change of emphasis, as</p> <p><span class="math-container">$\qquad$</span>&quot;We can find <span class="math-container">$(x,y)\in A\times \{y:g(y)\in B\}$</span> such that <span class="math-container">$f(x,y)=0$</span> &quot;.</p> <p>The second statement has the same structure as &quot;There is fire where there is smoke&quot; and might be interpreted as</p> <p><span class="math-container">$\qquad$</span>&quot;We can find <span class="math-container">$x\in A$</span> such that <span class="math-container">$f(x,y)=0$</span> whenever <span class="math-container">$g(y)\in B$</span> &quot;.</p> <p>Usually in mathematics, this latter type of interpretation is unintended, and the comma is needed.</p> <p>Ideally, all notation should be defined before it is used. However, it often happens that the defining condition—for example, &quot;where <span class="math-container">$c$</span> is some positive constant&quot;—is not the focus of interest of the statement; so we may not wish to preface our statement as &quot;There is some positive constant <span class="math-container">$c$</span> such that ... &quot;. This is especially the case when the notation and condition are routine and conventional. In such cases, the <em>where</em> clause (preceded, of course, by a comma!) is unobjectionable.</p>
Help to Prove that $\int_{0}^{\pi\over 4}\arctan{(\cot^2{x})}\mathrm dx={2\pi^2-\ln^2({3+2\sqrt{2})}\over 16}$
2,092,967
10
https://math.stackexchange.com/questions/2092967/help-to-prove-that-int-0-pi-over-4-arctan-cot2x-mathrm-dx-2-pi2
<p>I need help proving $(1)$. $$I=\int_{0}^{\pi\over 4}\arctan{(\cot^2{x})}\mathrm dx={2\pi^2-\ln^2(3+2\sqrt{2})\over 16}\tag1$$ This is what I have attempted:</p> <p>Enforcing a sub: $u=\cot^2{x}$ then $du=-2\cot{x}\csc^2{x}dx$</p> <p>Recall $1+\cot^2{x}=\csc^2{x}$</p> <p>$$I={1\over2}\int_{1}^{\infty}\arctan{u}\cdot{\mathrm du\over u^{1/2}+u^{3/2}}$$</p> <p>Recall $u^3+1=(u+1)(u^2-u+1)$</p> <p>$$I={1\over2}\int_{1}^{\infty}\arctan{u}\left({A\over u^{1/2}}+{B\over u+1}+{Cu+D\over u^2-u+1}\right)\mathrm du$$</p> <p>I am stuck at this point.</p> <p>Can anyone help to prove $(1)$?</p>
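Before turning to the answers, a quick numerical sanity check makes identity $(1)$ plausible. This is only a verification sketch, assuming NumPy and SciPy are available; it is not part of any proof. It also checks the reduction $I = \pi^2/16 + J/2$ with $J=\int_0^{\pi/2}\arctan(\sin\theta)\,d\theta$, which the first answer below derives.

```python
# Sanity check of (1):  I = (2*pi^2 - ln^2(3 + 2*sqrt(2))) / 16.
# A verification sketch only (assumes NumPy and SciPy), not a proof.
import numpy as np
from scipy.integrate import quad

# Left-hand side: integrate arctan(cot^2 x) over [0, pi/4].
lhs, _ = quad(lambda x: np.arctan(1.0 / np.tan(x) ** 2), 0.0, np.pi / 4)

# Right-hand side: the proposed closed form.
rhs = (2.0 * np.pi ** 2 - np.log(3.0 + 2.0 * np.sqrt(2.0)) ** 2) / 16.0

# Reduction used in the first answer: I = pi^2/16 + J/2,
# where J = integral of arctan(sin t) over [0, pi/2].
J, _ = quad(lambda t: np.arctan(np.sin(t)), 0.0, np.pi / 2)

print(lhs, rhs)  # both approximately 1.03950
assert abs(lhs - rhs) < 1e-8
assert abs(lhs - (np.pi ** 2 / 16.0 + J / 2.0)) < 1e-8
```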
[ { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 
\\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" }, { "answer_id": 2092987, "score": 13, "text": "<p>We first write $I$ as</p>\n\n<p>$$ I= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\left( \\arctan(\\cot^2 x) - \\arctan(1) \\right) \\, dx. $$</p>\n\n<p>Now using addition formulas for $\\arctan$ and $\\cos$, we have</p>\n\n<p>$$ \\arctan(\\cot^2 x) - \\arctan(1)\n= \\arctan\\left(\\frac{\\cot^2 x - 1}{\\cot^2 x + 1} \\right)\n= \\arctan(\\cos 2x). $$</p>\n\n<p>Consequently we have</p>\n\n<p>\\begin{align*}\nI\n&amp;= \\frac{\\pi^2}{16} + \\int_{0}^{\\frac{\\pi}{4}} \\arctan(\\cos 2x) \\, dx \\\\\n&amp;= \\frac{\\pi^2}{16} + \\frac{1}{2}\\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta,\n\\end{align*}</p>\n\n<p>where the last line follows from the substitution $\\theta = \\frac{\\pi}{2} - 2x$. The last integral can be computed in terms of the <a href=\"http://mathworld.wolfram.com/LegendresChi-Function.html\" rel=\"nofollow noreferrer\"><em>Legendre chi function $\\chi_2$</em></a>:</p>\n\n<p>$$ \\int_{0}^{\\frac{\\pi}{2}} \\arctan(\\sin \\theta) \\, d\\theta = 2\\chi_2(\\sqrt{2}-1). \\tag{1} $$</p>\n\n<p>For a proof of $\\text{(1)}$, see <a href=\"https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx\">my previous answer</a> for instance. There are only a handful of known special values of $\\chi_2$, but thankfully</p>\n\n<p>$$\\chi_2(\\sqrt{2}-1) = \\frac{\\pi^2}{16} - \\frac{1}{4}\\log^2(\\sqrt{2}+1) \\tag{2} $$</p>\n\n<p>is one of them. 
Summarizing, we have</p>\n\n<p>$$ I = \\frac{\\pi^2}{8} - \\frac{1}{4}\\log^2(\\sqrt{2}+1), $$</p>\n\n<p>which coincides with the proposed answer.</p>\n\n<hr>\n\n<p><strong>Addendum.</strong> The identity $\\text{(2)}$ follows by plugging $x = \\sqrt{2}-1$ to the identity</p>\n\n<p>$$ \\chi_2\\left(\\frac{1-x}{1+x}\\right) + \\chi_2(x) = \\frac{\\pi^2}{8} - \\frac{1}{2}\\log x \\log\\left(\\frac{1-x}{1+x}\\right), $$</p>\n\n<p>which can be easily checked by differentiating both sides.</p>\n" }, { "answer_id": 2095126, "score": 5, "text": "<p>For the evaluation of $\\displaystyle J=\\int_0^{\\tfrac{\\pi}{2}}\\arctan(\\sin x)dx$</p>\n\n<p>Perform the change of variable $y=\\sin x$,</p>\n\n<p>$\\displaystyle J=\\int_0^1 \\dfrac{\\arctan x}{\\sqrt{1-x^2}}dx$</p>\n\n<p>Perform the change of variable $y=\\sqrt{\\dfrac{1-x}{1+x}}$,</p>\n\n<p>$\\begin{align}\\displaystyle J&amp;=2\\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1-x^2}{1+x^2}\\right)}{1+x^2}dx\\\\\n&amp;=2\\int_0^1 \\dfrac{\\arctan(1)}{1+x^2}dx-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n&amp;=\\dfrac{\\pi^2}{8}-2\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx\\\\\n\\end{align}$</p>\n\n<p>$\\begin{align}\n\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx&amp;=\\Big[\\arctan x\\arctan(x^2)\\Big]_0^1-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n&amp;=\\dfrac{\\pi^2}{16}-\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\end{align}$</p>\n\n<p>Since,</p>\n\n<p>$\\displaystyle \\arctan x=\\int_0^1 \\dfrac{x}{1+t^2x^2}dx$ </p>\n\n<p>then,</p>\n\n<p>$\\begin{align}\n\\displaystyle K&amp;=\\int_0^1 \\dfrac{2x\\arctan x}{1+x^4}dx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\\\\n\\displaystyle &amp;=\\int_0^1\\int_0^1 \\left(\\dfrac{2t^2}{(1+t^4)(1+x^4)}+\\dfrac{2x^2}{(1+x^4)(1+t^4)}-\\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\\right)dtdx\\\\\n&amp;=\\displaystyle 4\\left(\\int_0^1 \\dfrac{t^2}{1+t^4}dt\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)-K\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\displaystyle K=2\\left(\\int_0^1 \\dfrac{x^2}{1+x^4}dx\\right)\\left(\\int_0^1 \\dfrac{1}{1+x^4}dx\\right)$</p>\n\n<p>Since,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{x^2}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2-\\sqrt{2}x+1}{x^2+\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi+\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>and,</p>\n\n<p>$\\begin{align}\\displaystyle \\int_0^1 \\dfrac{1}{1+x^4}dx&amp;=\\left[\\dfrac{1}{4\\sqrt{2}}\\ln\\left(\\dfrac{x^2+\\sqrt{2}x+1}{x^2-\\sqrt{2}x+1}\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x+1\\right)+\\dfrac{1}{2\\sqrt{2}}\\arctan\\left(\\sqrt{2}x-1\\right)\\right]_0^1\\\\\n&amp;=\\dfrac{1}{4\\sqrt{2}}\\Big(\\pi-\\ln\\left(3-2\\sqrt{2}\\right)\\Big)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{K=\\displaystyle \\dfrac{\\pi^2}{16}-\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle\\int_0^1 \\dfrac{\\arctan(x^2)}{1+x^2}dx=\\dfrac{1}{16}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{\\displaystyle J=\\dfrac{\\pi^2}{8}-\\dfrac{1}{8}\\Big(\\ln\\left(3-2\\sqrt{2}\\right)\\Big)^2}$</p>\n\n<p><strong>Addendum</strong>:</p>\n\n<p>$\\displaystyle I=\\int_0^{\\tfrac{\\pi}{4}} \\arctan\\left(\\cot^2(x)\\right)dx$</p>\n\n<p>Perform the change of 
variable,</p>\n\n<p>$y=\\tan x$,</p>\n\n<p>$\\begin{align}I&amp;=\\displaystyle \\int_0^1 \\dfrac{\\arctan\\left(\\tfrac{1}{x^2}\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\int_0^1 \\dfrac{\\tfrac{\\pi}{2}-\\arctan\\left(x^2\\right)}{1+x^2}dx\\\\\n&amp;=\\displaystyle \\dfrac{\\pi^2}{8}-\\int_0^1 \\dfrac{\\arctan\\left(x^2\\right)}{1+x^2}dx\n\\end{align}$</p>\n" } ]
Question: Help to Prove that $\int_{0}^{\pi\over 4}\arctan{(\cot^2{x})}\mathrm dx={2\pi^2-\ln^2(3+2\sqrt{2})\over 16}$ Answer 1: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ into the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 2: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dt$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2)}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1
\dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 3: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 4: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 5: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
Answer 4: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>Integrating by parts,</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\Big[\arctan(tx)\Big]_{t=0}^{t=1}=\int_0^1 \dfrac{x}{1+t^2x^2}dt$</p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2)}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>(the last double integral equals $K$ again, because $\int_0^1 \dfrac{dx}{1+t^2x^2}=\dfrac{\arctan t}{t}$).</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable $y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2 \end{align}$</p>
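<p>As a quick consistency check between the two answers: since $3-2\sqrt{2}=(\sqrt{2}-1)^2=(\sqrt{2}+1)^{-2}$, we have $\ln(3-2\sqrt{2})=-2\ln(\sqrt{2}+1)$, so</p> <p>$$ \frac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2=\frac{1}{4}\log^2(\sqrt{2}+1) \qquad\text{and}\qquad \frac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2=\frac{1}{2}\log^2(\sqrt{2}+1). $$</p> <p>Hence the Addendum reproduces $I=\frac{\pi^2}{8}-\frac{1}{4}\log^2(\sqrt{2}+1)\approx 1.0395$ from the Legendre chi evaluation, and $J=\frac{\pi^2}{8}-\frac{1}{2}\log^2(\sqrt{2}+1)=2\chi_2(\sqrt{2}-1)$, exactly as identities $\text{(1)}$ and $\text{(2)}$ predict.</p>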
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 8: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 9: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
\left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 10: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 
\dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 11: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 12: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 13: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
\left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 14: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 
\dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 15: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 16: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 17: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
\left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 18: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 
\dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 19: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 20: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 21: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
\left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 22: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 
\dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 23: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 24: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 25: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
\left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 26: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 
\dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 27: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \left( \arctan(\cot^2 x) - \arctan(1) \right) \, dx. $$</p> <p>Now using addition formulas for $\arctan$ and $\cos$, we have</p> <p>$$ \arctan(\cot^2 x) - \arctan(1) = \arctan\left(\frac{\cot^2 x - 1}{\cot^2 x + 1} \right) = \arctan(\cos 2x). $$</p> <p>Consequently we have</p> <p>\begin{align*} I &amp;= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} \arctan(\cos 2x) \, dx \\ &amp;= \frac{\pi^2}{16} + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta, \end{align*}</p> <p>where the last line follows from the substitution $\theta = \frac{\pi}{2} - 2x$. The last integral can be computed in terms of the <a href="http://mathworld.wolfram.com/LegendresChi-Function.html" rel="nofollow noreferrer"><em>Legendre chi function $\chi_2$</em></a>:</p> <p>$$ \int_{0}^{\frac{\pi}{2}} \arctan(\sin \theta) \, d\theta = 2\chi_2(\sqrt{2}-1). \tag{1} $$</p> <p>For a proof of $\text{(1)}$, see <a href="https://math.stackexchange.com/questions/555882/integral-int-01-frac-arctan2x-sqrt1-x2-mathrm-dx">my previous answer</a> for instance. There are only a handful of known special values of $\chi_2$, but thankfully</p> <p>$$\chi_2(\sqrt{2}-1) = \frac{\pi^2}{16} - \frac{1}{4}\log^2(\sqrt{2}+1) \tag{2} $$</p> <p>is one of them. 
Summarizing, we have</p> <p>$$ I = \frac{\pi^2}{8} - \frac{1}{4}\log^2(\sqrt{2}+1), $$</p> <p>which coincides with the proposed answer.</p> <hr> <p><strong>Addendum.</strong> The identity $\text{(2)}$ follows by plugging $x = \sqrt{2}-1$ to the identity</p> <p>$$ \chi_2\left(\frac{1-x}{1+x}\right) + \chi_2(x) = \frac{\pi^2}{8} - \frac{1}{2}\log x \log\left(\frac{1-x}{1+x}\right), $$</p> <p>which can be easily checked by differentiating both sides.</p> Answer 28: <p>For the evaluation of $\displaystyle J=\int_0^{\tfrac{\pi}{2}}\arctan(\sin x)dx$</p> <p>Perform the change of variable $y=\sin x$,</p> <p>$\displaystyle J=\int_0^1 \dfrac{\arctan x}{\sqrt{1-x^2}}dx$</p> <p>Perform the change of variable $y=\sqrt{\dfrac{1-x}{1+x}}$,</p> <p>$\begin{align}\displaystyle J&amp;=2\int_0^1 \dfrac{\arctan\left(\tfrac{1-x^2}{1+x^2}\right)}{1+x^2}dx\\ &amp;=2\int_0^1 \dfrac{\arctan(1)}{1+x^2}dx-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ &amp;=\dfrac{\pi^2}{8}-2\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx\\ \end{align}$</p> <p>$\begin{align} \displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx&amp;=\Big[\arctan x\arctan(x^2)\Big]_0^1-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ &amp;=\dfrac{\pi^2}{16}-\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \end{align}$</p> <p>Since,</p> <p>$\displaystyle \arctan x=\int_0^1 \dfrac{x}{1+t^2x^2}dx$ </p> <p>then,</p> <p>$\begin{align} \displaystyle K&amp;=\int_0^1 \dfrac{2x\arctan x}{1+x^4}dx\\ \displaystyle &amp;=\int_0^1\int_0^1 \dfrac{2x^2}{(1+t^2x^2)(1+x^4)}dtdx\\ \displaystyle &amp;=\int_0^1\int_0^1 \left(\dfrac{2t^2}{(1+t^4)(1+x^4)}+\dfrac{2x^2}{(1+x^4)(1+t^4)}-\dfrac{2t^2}{(1+t^4)(1+t^2x^2}\right)dtdx\\ &amp;=\displaystyle 4\left(\int_0^1 \dfrac{t^2}{1+t^4}dt\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)-K \end{align}$</p> <p>Therefore,</p> <p>$\displaystyle K=2\left(\int_0^1 \dfrac{x^2}{1+x^4}dx\right)\left(\int_0^1 \dfrac{1}{1+x^4}dx\right)$</p> <p>Since,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{x^2}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2-\sqrt{2}x+1}{x^2+\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi+\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>and,</p> <p>$\begin{align}\displaystyle \int_0^1 \dfrac{1}{1+x^4}dx&amp;=\left[\dfrac{1}{4\sqrt{2}}\ln\left(\dfrac{x^2+\sqrt{2}x+1}{x^2-\sqrt{2}x+1}\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x+1\right)+\dfrac{1}{2\sqrt{2}}\arctan\left(\sqrt{2}x-1\right)\right]_0^1\\ &amp;=\dfrac{1}{4\sqrt{2}}\Big(\pi-\ln\left(3-2\sqrt{2}\right)\Big) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{K=\displaystyle \dfrac{\pi^2}{16}-\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle\int_0^1 \dfrac{\arctan(x^2)}{1+x^2}dx=\dfrac{1}{16}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p>Therefore,</p> <p>$\boxed{\displaystyle J=\dfrac{\pi^2}{8}-\dfrac{1}{8}\Big(\ln\left(3-2\sqrt{2}\right)\Big)^2}$</p> <p><strong>Addendum</strong>:</p> <p>$\displaystyle I=\int_0^{\tfrac{\pi}{4}} \arctan\left(\cot^2(x)\right)dx$</p> <p>Perform the change of variable,</p> <p>$y=\tan x$,</p> <p>$\begin{align}I&amp;=\displaystyle \int_0^1 \dfrac{\arctan\left(\tfrac{1}{x^2}\right)}{1+x^2}dx\\ &amp;=\displaystyle \int_0^1 \dfrac{\tfrac{\pi}{2}-\arctan\left(x^2\right)}{1+x^2}dx\\ &amp;=\displaystyle \dfrac{\pi^2}{8}-\int_0^1 \dfrac{\arctan\left(x^2\right)}{1+x^2}dx \end{align}$</p> Answer 29: <p>We first write $I$ as</p> <p>$$ I= \frac{\pi^2}{16} + \int_{0}^{\frac{\pi}{4}} 
Why aren't the functions $f(x) = \frac{x-1}{x-1}$ and $f(x) = 1$ the same?
2,093,466
10
https://math.stackexchange.com/questions/2093466/why-arent-the-functions-fx-fracx-1x-1-and-fx-1-the-same
<p>I understand that division by zero isn't allowed, but we merely multiplied $f(x) = 1$ by $\frac{x-1}{x-1}$ to get $f(x) = \frac{x-1}{x-1}$, and $a\cdot 1 = 1\cdot a = a$, so they're the <strong>same function</strong> but with a <strong>different domain</strong>; how is this possible?</p> <p><em>Or, in other words:</em> why don't we simplify $f(x) = \frac{x-1}{x-1}$ to $f(x) = 1$ before plotting the points? Is it just defined this way, or is there a particular reason?</p> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p>
[ { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. 
So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. 
So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). 
Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. 
Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. 
But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. 
So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. 
So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). 
Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. 
Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since \"$\\tfrac{0}{0}$\" is undefined), while the other one is simply $1$ at $x=1$.</p>\n\n<blockquote>\n <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p>\n</blockquote>\n\n<p>You can multiply by any fraction $\\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\\tfrac{x-1}{x-1}$ is fine, but only valid for $x \\ne 1$.</p>\n\n<blockquote>\n <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points.\n Is it just defined this way or is there a particular reason ?</p>\n</blockquote>\n\n<p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\\tfrac{ka}{kb}$ to $\\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \\ne 0$ since you can't divide by $0$. So \"simplifying\" $\\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \\ne 0$ so for $x \\ne 1$.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\\mathbb{R}$ and the domain of \n $f(x) = \\frac{x-1}{x-1}$ is $\\mathbb{R}$ except $1$.</p>\n</blockquote>\n\n<p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\\mathbb{R}$ is taken as codomain). Then look at:\n$$f : \\mathbb{R} \\to \\mathbb{R} : x \\mapsto f(x) = 1$$\nand\n$$g : \\mathbb{R} \\setminus \\left\\{ 1 \\right\\} \\to \\mathbb{R} : x \\mapsto g(x) = \\frac{x-1}{x-1}$$\nThe functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p>\n" }, { "answer_id": 2093480, "score": 8, "text": "<p><strong>Question</strong>: What is a function? </p>\n\n<p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p>\n\n<p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p>\n\n<p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same \"maps\" otherwise they are not.</p>\n" }, { "answer_id": 2093494, "score": 0, "text": "<p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p>\n" }, { "answer_id": 2093470, "score": 19, "text": "<p>They are the same <em>almost everywhere</em>. 
Question: Why aren't the functions $f(x) = \frac{x-1}{x-1}$ and $f(x) = 1$ the same? Answer 1: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$, but not when $a=0$, because the fraction you want to multiply with isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But again, this only works for $k \ne 0$, since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine for $x-1 \ne 0$, i.e. for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common, though, that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x) = 1$ for all $x$ except $x=1$, where $g$ is undefined.</p> Answer 2: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe, simply said, it is a map (recipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, the Domain $D$, into another set, the Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two functions have the same domain and the same recipe then they are the same "maps"; otherwise they are not.</p> Answer 3: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$; a different thing happens with $f(x)=1$.</p>
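To make Answer 1's point concrete, here is a minimal Python sketch (my addition, not part of the original answers; the names `f` and `g` just mirror Answer 1's notation): away from $x=1$ the two rules produce identical values, while evaluating $(x-1)/(x-1)$ at $x=1$ simply has no result.

```python
# Sketch of the two functions discussed above (naming follows Answer 1):
# f is the constant function on all of R; g is only defined for x != 1.

def f(x: float) -> float:
    """f : R -> R, x |-> 1."""
    return 1.0

def g(x: float) -> float:
    """g : R minus {1} -> R, x |-> (x-1)/(x-1); fails at x == 1."""
    return (x - 1) / (x - 1)

# Away from x = 1 the two rules agree exactly.
for x in (-2.0, 0.0, 0.5, 3.0):
    assert f(x) == g(x) == 1.0

# At x = 1, f has a value but g does not: Python refuses to divide
# 0.0 by 0.0, mirroring the fact that "0/0" is undefined.
print(f(1.0))  # 1.0
try:
    g(1.0)
except ZeroDivisionError as err:
    print("g(1) is undefined:", err)
```

That the call raises `ZeroDivisionError` rather than returning a value is the computational analogue of $1$ not lying in the domain of $g$.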
But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 5: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 6: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 7: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 8: <p><strong>Question</strong>: What is a function? 
</p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 9: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 10: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 11: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 12: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 13: <p>They are the same <em>almost everywhere</em>. 
But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 14: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 15: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 16: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. 
So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 17: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 18: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 19: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 20: <p><strong>Question</strong>: What is a function? 
</p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 21: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 22: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 23: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 24: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 25: <p>They are the same <em>almost everywhere</em>. 
But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 26: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 27: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 28: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. 
So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 29: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 30: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 31: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 32: <p><strong>Question</strong>: What is a function? 
</p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 33: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 34: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 35: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 36: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 37: <p>They are the same <em>almost everywhere</em>. 
But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 38: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 39: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 40: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. 
So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 41: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 42: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 43: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 44: <p><strong>Question</strong>: What is a function? 
</p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 45: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 46: <p>They are the same <em>almost everywhere</em>. But clearly one of them does not exist for $x=1$ (since "$\tfrac{0}{0}$" is undefined), while the other one is simply $1$ at $x=1$.</p> <blockquote> <p>I understand that division by zero isn't allowed, but we merely just multiplied f(x) = 1 by (x-1)/(x-1)</p> </blockquote> <p>You can multiply by any fraction $\tfrac{a}{a}$; but not when $a=0$ because the fraction you want to multiply with, isn't even defined then. So multiplying by $\tfrac{x-1}{x-1}$ is fine, but only valid for $x \ne 1$.</p> <blockquote> <p>why don't we simplify f(x) = (x-1)/(x-1) to f(x) = 1 before plotting the points. Is it just defined this way or is there a particular reason ?</p> </blockquote> <p>You can simplify, but recall that simplifying is actually dividing numerator and denominator by the same number: you can simplify $\tfrac{ka}{kb}$ to $\tfrac{a}{b}$ by dividing by $k$. But also then: this only works for $k \ne 0$ since you can't divide by $0$. So "simplifying" $\tfrac{x-1}{x-1}$ to $1$ is fine, for $x-1 \ne 0$ so for $x \ne 1$.</p> <hr> <blockquote> <p><strong>Note:</strong> my book says the domain of $f(x) = 1$ is $\mathbb{R}$ and the domain of $f(x) = \frac{x-1}{x-1}$ is $\mathbb{R}$ except $1$.</p> </blockquote> <p>Technically, the domain is a part of the function: it should be given (as well as the codomain). It is very common though that when unspecified, in the context of real-valued functions of a real variable, we assume the 'maximal domain' is intended (and $\mathbb{R}$ is taken as codomain). Then look at: $$f : \mathbb{R} \to \mathbb{R} : x \mapsto f(x) = 1$$ and $$g : \mathbb{R} \setminus \left\{ 1 \right\} \to \mathbb{R} : x \mapsto g(x) = \frac{x-1}{x-1}$$ The functions $f$ and $g$ are different, but $f(x) = g(x)=1$ for all $x$ except when $x=1$, where $g$ is undefined.</p> Answer 47: <p><strong>Question</strong>: What is a function? </p> <p><strong>Answer</strong>: Maybe simply said it is a map (receipe), $f(x)$, that projects some elements, $x$, contained in a specifically defined set, Domain $D$, into another set, Range $R$.</p> <p><strong>Discussion</strong>: Hence when defining a function one must define the Domain as well as the functional form. Otherwise the function is not defined.</p> <p><strong>Conclusion</strong>: If two funtions have the same domain and the same receipe then they are the same "maps" otherwise they are not.</p> Answer 48: <p>$f(x)=(x-1)/(x-1)$ does not have a value when $x=1$, different thing happens to $f(x)=1$</p> Answer 49: <p>They are the same <em>almost everywhere</em>. 
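<p>A compact way to put the conclusion above in symbols (a minimal sketch, assuming the usual convention that a function is specified together with its domain): $$f : \mathbb{R} \to \mathbb{R},\quad f(x) = 1 \qquad\text{and}\qquad g : \mathbb{R} \setminus \{1\} \to \mathbb{R},\quad g(x) = \frac{x-1}{x-1}$$ have the same recipe wherever both are defined, but different domains, so they are different functions; $g$ is simply undefined at $x = 1$.</p>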
Cantor's proof and Zorn's lemma
2,095,044
10
https://math.stackexchange.com/questions/2095044/cantors-proof-and-zorns-lemma
<p>In Cantor's proof that the real numbers are uncountable, he uses the fact that we are always able to construct an element which doesn't belong to the given list of numbers.</p> <p>What if we add those numbers one after another and keep doing that? Notice that the lists form a partially ordered set with inclusion as the comparator. Any chain has an upper bound, which is the union of all lists in that chain. So according to Zorn's lemma there should exist a maximal element (list), which is the list that we are looking for.</p> <p>Obviously there is a problem with my reasoning, but I don't understand where it is. Can anyone show me the pitfall of my reasoning?</p>
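<p>A minimal formalization of the construction described in the question, assuming the "lists" are modeled as countable subsets of $\mathbb{R}$ (the name $\mathbb{P}$ is illustrative, not from the post itself): $$\mathbb{P} = \{A \subseteq \mathbb{R} : A \text{ is countable}\}, \qquad A \le B \iff A \subseteq B.$$ For Zorn's lemma to yield a maximal element of $\mathbb{P}$, <em>every</em> chain $\mathcal{C} \subseteq \mathbb{P}$ must have an upper bound inside $\mathbb{P}$; whether $\bigcup \mathcal{C}$ qualifies is exactly the step the answers below examine.</p>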
[ { "answer_id": 2095050, "score": 21, "text": "<p>Your partial order $\\mathbb{P}$ is (something like) countable subsets of $\\mathbb{R}$, ordered by inclusion. You are right that, <em>if</em> $\\mathbb{P}$ has a maximal element, then $\\mathbb{R}$ is countable. </p>\n\n<p>So does $\\mathbb{P}$ have a maximal element? Well, by Zorn's Lemma it would ... <em>if</em> every chain in $\\mathbb{P}$ had an upper bound. Here you write</p>\n\n<blockquote>\n <p>Any chain has an upper bound which is the union of all lists in that chain.</p>\n</blockquote>\n\n<p><em>But this isn't true</em>! The union of a chain in $\\mathbb{P}$ <em>is</em> a set of reals, but not necessarily a <em>countable</em> set of reals - and $\\mathbb{P}$ only contains <em>countable</em> sets of reals. So the union of a chain in $\\mathbb{P}$ need not be an element of $\\mathbb{P}$.</p>\n\n<p>The union of a <em>countable</em> chain in $\\mathbb{P}$ will indeed be an element of $\\mathbb{P}$ - but not every chain in $\\mathbb{P}$ is countable, and Zorn's Lemma needs <em>every</em> chain to have an upper bound in the poset.</p>\n" }, { "answer_id": 2095064, "score": 7, "text": "<p>There are two problems here.</p>\n\n<ol>\n<li><p>You think about countable subsets, because the diagonal argument is working only on countable sets. But a countable set is <em>not</em> a list, which is the enumeration as well. Increasing the list means working with countable ordinals, rather than functions from $\\Bbb N$, otherwise you cannot extend anything because a list is a function with domain $\\Bbb N$... so how you're going to extend it?</p>\n\n<p>Once you only work with countable sets, then your argument fails for the same reason that $\\Bbb R$ is not finite. Look at the set of finite sets of reals, there every finite chain has an upper bound, but the <em>every</em> chain. There are certainly infinite chains without upper bounds.</p>\n\n<p>You could ask what would happen if we require that every chain <em>is</em> countable <em>and</em> has an upper bound. Well, this is certainly consistent without the axiom of choice. But this can only happen in models where the axiom of choice fails, and therefore Zorn's lemma is no longer applicable.</p></li>\n<li><p>Even if you somehow could extend the diagonalization argument beyond $\\Bbb N$ and work with the ordinals and not the natural numbers (this is doable under things like Martin's Axiom, which can be thought of as a form of extended diagonalization), you <em>still</em> have a chain without an upper bound: well-order the reals in the least order type possible, and look at the initial segments of this order. These define a chain of \"small list\" without an upper bound.</p></li>\n</ol>\n" }, { "answer_id": 2095050, "score": 21, "text": "<p>Your partial order $\\mathbb{P}$ is (something like) countable subsets of $\\mathbb{R}$, ordered by inclusion. You are right that, <em>if</em> $\\mathbb{P}$ has a maximal element, then $\\mathbb{R}$ is countable. </p>\n\n<p>So does $\\mathbb{P}$ have a maximal element? Well, by Zorn's Lemma it would ... <em>if</em> every chain in $\\mathbb{P}$ had an upper bound. Here you write</p>\n\n<blockquote>\n <p>Any chain has an upper bound which is the union of all lists in that chain.</p>\n</blockquote>\n\n<p><em>But this isn't true</em>! The union of a chain in $\\mathbb{P}$ <em>is</em> a set of reals, but not necessarily a <em>countable</em> set of reals - and $\\mathbb{P}$ only contains <em>countable</em> sets of reals. 
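<p>A worked sketch of the chain without an upper bound that both answers point to, assuming the axiom of choice (the symbols $X$, $x_\alpha$, $C_\alpha$ are illustrative): pick $X = \{x_\alpha : \alpha < \omega_1\} \subseteq \mathbb{R}$, an uncountable set of reals well-ordered in order type $\omega_1$, and set $$C_\alpha = \{x_\beta : \beta < \alpha\} \qquad (\alpha < \omega_1).$$ Each $C_\alpha$ is countable, and $\alpha < \alpha'$ implies $C_\alpha \subsetneq C_{\alpha'}$, so the $C_\alpha$ form a chain of countable sets; but any upper bound would be a countable set containing $\bigcup_{\alpha < \omega_1} C_\alpha = X$, which is impossible.</p>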
Question: Cantor&#39;s proof and Zorn&#39;s lemma Answer 1: <p>Your partial order $\mathbb{P}$ is (something like) countable subsets of $\mathbb{R}$, ordered by inclusion. You are right that, <em>if</em> $\mathbb{P}$ has a maximal element, then $\mathbb{R}$ is countable. </p> <p>So does $\mathbb{P}$ have a maximal element? Well, by Zorn's Lemma it would ... <em>if</em> every chain in $\mathbb{P}$ had an upper bound. Here you write</p> <blockquote> <p>Any chain has an upper bound which is the union of all lists in that chain.</p> </blockquote> <p><em>But this isn't true</em>! The union of a chain in $\mathbb{P}$ <em>is</em> a set of reals, but not necessarily a <em>countable</em> set of reals - and $\mathbb{P}$ only contains <em>countable</em> sets of reals. So the union of a chain in $\mathbb{P}$ need not be an element of $\mathbb{P}$.</p> <p>The union of a <em>countable</em> chain in $\mathbb{P}$ will indeed be an element of $\mathbb{P}$ - but not every chain in $\mathbb{P}$ is countable, and Zorn's Lemma needs <em>every</em> chain to have an upper bound in the poset.</p> Answer 2: <p>There are two problems here.</p> <ol> <li><p>You think about countable subsets, because the diagonal argument is working only on countable sets. But a countable set is <em>not</em> a list, which is the enumeration as well. Increasing the list means working with countable ordinals, rather than functions from $\Bbb N$, otherwise you cannot extend anything because a list is a function with domain $\Bbb N$... so how you're going to extend it?</p> <p>Once you only work with countable sets, then your argument fails for the same reason that $\Bbb R$ is not finite. Look at the set of finite sets of reals, there every finite chain has an upper bound, but the <em>every</em> chain. There are certainly infinite chains without upper bounds.</p> <p>You could ask what would happen if we require that every chain <em>is</em> countable <em>and</em> has an upper bound. Well, this is certainly consistent without the axiom of choice. But this can only happen in models where the axiom of choice fails, and therefore Zorn's lemma is no longer applicable.</p></li> <li><p>Even if you somehow could extend the diagonalization argument beyond $\Bbb N$ and work with the ordinals and not the natural numbers (this is doable under things like Martin's Axiom, which can be thought of as a form of extended diagonalization), you <em>still</em> have a chain without an upper bound: well-order the reals in the least order type possible, and look at the initial segments of this order. These define a chain of "small list" without an upper bound.</p></li> </ol> Answer 3: <p>Your partial order $\mathbb{P}$ is (something like) countable subsets of $\mathbb{R}$, ordered by inclusion. You are right that, <em>if</em> $\mathbb{P}$ has a maximal element, then $\mathbb{R}$ is countable. </p> <p>So does $\mathbb{P}$ have a maximal element? Well, by Zorn's Lemma it would ... <em>if</em> every chain in $\mathbb{P}$ had an upper bound. Here you write</p> <blockquote> <p>Any chain has an upper bound which is the union of all lists in that chain.</p> </blockquote> <p><em>But this isn't true</em>! The union of a chain in $\mathbb{P}$ <em>is</em> a set of reals, but not necessarily a <em>countable</em> set of reals - and $\mathbb{P}$ only contains <em>countable</em> sets of reals. 
Proof that $x^4+1$ is irreducible over $\mathbb{Q}$
2,096,602
10
https://math.stackexchange.com/questions/2096602/proof-that-x41-is-irreducible-over-mathbbq
<p>I've just started to work myself into the topic of irreducibility of polynomials and I would like to show the following:</p> <blockquote> <p>Prove that <span class="math-container">$x^4+1$</span> is irreducible over <span class="math-container">$\mathbb{Q}$</span>.</p> </blockquote> <p>I've managed to come up with a proof that seems reasonable to me but I'm not sure if it is actually correct. Could someone tell me if I made any beginner's mistakes?</p> <p>My proof: Clearly <span class="math-container">$x^4+1$</span> has no root in <span class="math-container">$\mathbb{Q}$</span>, so it can only be decomposed into two polynomials of degree <span class="math-container">$2$</span>. Therefore there must be <span class="math-container">$a, b \in \mathbb{Q}$</span> s.t. <span class="math-container">$x^4+1=(x^2+ax+1)(x^2+bx+1)$</span>. If we multiply this out we get <span class="math-container">$x^4+(a+b)x^3+(ab+2)x^2+(a+b)x+1$</span>. If we now equate the coefficients we get that <span class="math-container">$a+b = 0$</span> and therefore <span class="math-container">$a = -b$</span> and that <span class="math-container">$ab+2=0$</span> and therefore that <span class="math-container">$a^2=2$</span>, which is not possible in <span class="math-container">$\mathbb{Q}$</span>.</p> <p>Any feedback is appreciated!</p>
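For reference, here is the Eisenstein route in full (a worked computation expanding a method that the answers below only mention in passing): writing $f(x)=x^4+1$ and shifting $x \mapsto x+1$ gives $$f(x+1)=(x+1)^4+1=x^4+4x^3+6x^2+4x+2,$$ and with $p=2$ we have $2 \mid 4,\,6,\,4,\,2$ while $2 \nmid 1$ (the leading coefficient) and $4 \nmid 2$ (the constant term), so Eisenstein's criterion shows $f(x+1)$, and hence $f(x)=x^4+1$, is irreducible over $\mathbb{Q}$.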
[ { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p>\n\n<p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \\pm 1$). </p>\n\n<p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p>\n\n<p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p>\n" }, { "answer_id": 2096630, "score": 6, "text": "<p>Your proof is almost fine - you just have to write more generally\n$$\nx^4+1=(x^2+ax+b)(x^2+cx+d),\n$$\nwhere we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p>\n\n<p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to\n$$\nf(x+1)=x^4+4x^3+6x^2+4x+2.\n$$</p>\n" }, { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p>\n\n<p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \\pm 1$). </p>\n\n<p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p>\n\n<p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p>\n" }, { "answer_id": 2096630, "score": 6, "text": "<p>Your proof is almost fine - you just have to write more generally\n$$\nx^4+1=(x^2+ax+b)(x^2+cx+d),\n$$\nwhere we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. 
In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p>\n\n<p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to\n$$\nf(x+1)=x^4+4x^3+6x^2+4x+2.\n$$</p>\n" }, { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p>\n\n<p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \\pm 1$). </p>\n\n<p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p>\n\n<p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p>\n" }, { "answer_id": 2096630, "score": 6, "text": "<p>Your proof is almost fine - you just have to write more generally\n$$\nx^4+1=(x^2+ax+b)(x^2+cx+d),\n$$\nwhere we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p>\n\n<p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to\n$$\nf(x+1)=x^4+4x^3+6x^2+4x+2.\n$$</p>\n" }, { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p>\n\n<p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \\pm 1$). </p>\n\n<p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. 
</p>\n\n<p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p>\n" }, { "answer_id": 2096630, "score": 6, "text": "<p>Your proof is almost fine - you just have to write more generally\n$$\nx^4+1=(x^2+ax+b)(x^2+cx+d),\n$$\nwhere we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p>\n\n<p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to\n$$\nf(x+1)=x^4+4x^3+6x^2+4x+2.\n$$</p>\n" }, { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p>\n\n<p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \\pm 1$). </p>\n\n<p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p>\n\n<p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p>\n" }, { "answer_id": 2096630, "score": 6, "text": "<p>Your proof is almost fine - you just have to write more generally\n$$\nx^4+1=(x^2+ax+b)(x^2+cx+d),\n$$\nwhere we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p>\n\n<p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to\n$$\nf(x+1)=x^4+4x^3+6x^2+4x+2.\n$$</p>\n" }, { "answer_id": 2096676, "score": 13, "text": "<p>For this polynomial, it can simplified, since we know its factorisation over $\\mathbf R$:\n$$x^4+1=(x^2+1)^2-2x^2=(x^2-\\sqrt2x+1)(x^2+\\sqrt2x+1),$$\nand this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\\mathbf Q$.</p>\n" }, { "answer_id": 2096634, "score": 8, "text": "<p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p>\n\n<p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p>\n\n<p>We get </p>\n\n<p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p>\n\n<p>You get $cd=1$ and thus $c=d^{-1}$. 
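Both coefficient-comparison answers above solve the same small system implicitly; for the record, here it is spelled out (an editorial sketch, not part of either original answer, assuming a monic integer factorisation $x^4+1=(x^2+ax+b)(x^2+cx+d)$, which suffices by Gauss's lemma):

$$
\begin{aligned}
x^3 &:\quad a+c=0,\\
x^2 &:\quad b+d+ac=0,\\
x^1 &:\quad ad+bc=0,\\
x^0 &:\quad bd=1.
\end{aligned}
$$

Substituting $c=-a$ into the $x^2$ row gives $a^2=b+d$, while $bd=1$ forces $b=d=\pm 1$ over the integers; hence $a^2=\pm 2$, which has no integer solution.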
Question: Proof that $x^4+1$ is irreducible over $\mathbb{Q}$ Answer 1: <p>For this polynomial, it can be simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients (note that $x^4+1$ has no real roots, so there are no linear factors to consider). So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 2: <p>Your approach in principle works fine; you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad+bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c=c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+c^{-1}=c+d=0$, which is impossible. In the latter case, you already dealt with $c=d=1$, and $c=d=-1$ is essentially the same: it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 3: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$ or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=-a$ and $a^2=b+d=\pm 2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$ and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p>
Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 7: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 8: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 9: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 10: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 11: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 12: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. 
In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 13: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 14: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 15: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 16: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 17: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. 
</p> Answer 18: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 19: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 20: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 21: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 22: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 23: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. 
</p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 24: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 25: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 26: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 27: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 28: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 29: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. 
In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 30: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 31: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 32: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 33: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 34: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 35: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). 
</p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 36: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 37: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 38: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 39: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 40: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 41: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. 
</p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 42: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 43: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 44: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. You still get $a=-b$ (from the third power), and then $ad +bc$ becomes $ad-ac$ and further $a(c^{-1}-c)$. </p> <p>Thus, $a=0$ or $c= c^{-1}$ (that is, $c = \pm 1$). </p> <p>In the former case, you get $c+ c^{-1}= c+d = 0$ which is impossible. In the latter case, you already dealt with $c=d=1$ and $c= d= -1$ is essentially the same, it yields the impossible $a^2 = -2$. </p> <p>That said, the Eisenstein approach mentioned in a comment is more elegant, if you have it available. </p> Answer 45: <p>Your proof is almost fine - you just have to write more generally $$ x^4+1=(x^2+ax+b)(x^2+cx+d), $$ where we obtain of course $bd=1$, which over the integers only leaves the possibilities $b=d=1$, or $b=d=-1$. In both cases, the associated Diophantine equations have no solution, and this is easy to see (we obtain $c=−a$, and $a^2 =b+d=±2$, which is not possible).</p> <p>Furthermore, the polynomial $x^4+1$ is the $8$-th cyclotomic polynomial, and hence irreducible by the theorem of your algebra lecture. Of course, there are several other methods to prove this directly, e.g., writing $f(x)=x^4+1$, and applying Eisenstein to $$ f(x+1)=x^4+4x^3+6x^2+4x+2. $$</p> Answer 46: <p>For this polynomial, it can simplified, since we know its factorisation over $\mathbf R$: $$x^4+1=(x^2+1)^2-2x^2=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1),$$ and this factorisation does not have rational coefficients. So by the uniqueness of factorisation, it cannot have another factorisation over $\mathbf Q$.</p> Answer 47: <p>Your approach in principle works fine you just tried to take a short-cut that is not available, yet is not really needed. Instead: </p> <p>$$(x^4+1) = (x^2+ax+c)(x^2+bx+d)$$</p> <p>We get </p> <p>$$x^4+(a+b)x^3+(ab+c+d)x^2+(ad+bc)x+cd.$$</p> <p>You get $cd=1$ and thus $c=d^{-1}$. 
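As a quick cross-check of the arguments above, here is a minimal SymPy sketch (assuming SymPy is available; the script is an editorial illustration, not part of any answer). It confirms that $x^4+1$ is irreducible over $\mathbf Q$, and that the real factorisation in Answer 43 and the Eisenstein shift in Answer 42 expand as claimed:

```python
# Illustrative SymPy check of the claims above (assumes SymPy is installed).
from sympy import symbols, Poly, sqrt, expand, QQ

x = symbols('x')

# Irreducibility over the rationals.
print(Poly(x**4 + 1, x, domain=QQ).is_irreducible)              # True

# The real factorisation (x^2+1)^2 - 2x^2 quoted in Answer 43.
print(expand((x**2 - sqrt(2)*x + 1) * (x**2 + sqrt(2)*x + 1)))  # x**4 + 1

# The Eisenstein shift f(x+1) from Answer 42 (the prime p = 2 applies).
print(expand((x + 1)**4 + 1))   # x**4 + 4*x**3 + 6*x**2 + 4*x + 2
```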
Intuition behind Transition Kernels (without use of Markov Chains)
2,096,875
10
https://math.stackexchange.com/questions/2096875/intuition-behind-transition-kernels-without-use-of-markov-chains
<p>I am studying probability and measure theory as part of my masters course. I am very new to measure theory. I understand the basic concepts, but the one topic I am really struggling with is transition kernels. </p> <p>I am using the book "Probability and Stochastics" by Erhan Cinlar for reference.</p> <p>I do not quite understand the idea behind transition kernels. All the research I did on my own led me to the topic of Markov chains, and as of now I have very little knowledge of Markov chains. Is there an easier way to define them? The definition below, given in the book, is a little abstruse, and so I would like to gain some intuition about it.</p> <p>Given measure spaces (E,<strong><em>E</em></strong>) and (F,<strong><em>F</em></strong>), we define a transition kernel K:E×<strong><em>F</em></strong>→[0,∞] such that:</p> <p>i) x → K(x,B) is <strong><em>E</em></strong> measurable for every set B in <strong><em>F</em></strong> </p> <p>ii) B → K(x,B) is a measure on (F,<strong><em>F</em></strong>) for every x in E</p> <p>Any help is much appreciated!</p> <p>Thanks in advance.</p>
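To make conditions i) and ii) concrete before the answers, here is a minimal Python sketch (an editorial illustration, not from the book) of one specific kernel on E = F = [0,1]: the "uniform shift" kernel that the first answer below evaluates by hand, where the next state is (x + U) mod 1 with U uniform on [0, 1/3]. The function name, and restricting the sets B to intervals [a, b], are simplifications for illustration only, not part of the general definition.

```python
# Illustrative sketch: the "uniform shift" kernel on [0,1] from the first
# answer below.  K(x, [a,b]) is the probability that (x + U) mod 1 lands
# in [a, b], with U ~ Uniform[0, 1/3].
def K(x, a, b, width=1/3):
    """Mass that the measure B -> K(x, B) assigns to the interval [a, b]."""
    lo, hi = x, x + width              # support of x + U before wrapping
    def overlap(l, h):                 # length of [l, h] intersected with [a, b]
        return max(0.0, min(h, b) - max(l, a))
    # Split the support at the wrap point 1, then rescale by the density 1/width.
    mass = overlap(lo, min(hi, 1.0)) + overlap(0.0, max(hi - 1.0, 0.0))
    return mass / width

# For each fixed x this is a probability measure in its set argument
# (condition ii); as a function of x for a fixed [a, b] it is piecewise
# linear, hence measurable (condition i).
print(K(1/3, 2/3, 1.0))   # 0.0
print(K(1/3, 1/3, 1.0))   # 1.0
print(K(1/3, 1/3, 0.5))   # 0.5 (up to float rounding)
print(K(5/6, 0.0, 0.5))   # 0.5
```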
[ { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. 
K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. 
And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. 
The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. 
</p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. 
So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. 
Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. 
For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. 
And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. 
I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. 
Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. 
</p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. 
K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. 
And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" }, { "answer_id": 3066231, "score": 2, "text": "<p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. 
The \"naive\" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class=\"math-container\">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class=\"math-container\">$J$</span>, starting from the current state <span class=\"math-container\">$i$</span>: <span class=\"math-container\">$\\mathsf{Pr}(X_{n+1} \\in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p>\n\n<p>Let's consider a kernel <span class=\"math-container\">$K : (E,\\mathcal{E}) \\to (E, \\mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class=\"math-container\">$x\\in E$</span> and <span class=\"math-container\">$Y\\in \\mathcal{E}$</span>, <span class=\"math-container\">$K(x, Y)$</span> is the probability of arriving in some state in <span class=\"math-container\">$Y$</span> from the current state <span class=\"math-container\">$x$</span>. Then the two conditions are natural. For example, to require that <span class=\"math-container\">$\\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class=\"math-container\">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class=\"math-container\">$1$</span>. For some very simple examples one can refer to <a href=\"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html\" rel=\"nofollow noreferrer\">Markov kernel on IPFS.io</a>.</p>\n\n<p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p>\n" }, { "answer_id": 2096910, "score": 15, "text": "<p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p>\n\n<hr>\n\n<p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\\{M(t)\\}_{t=0}^{\\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that \n$$P_{ij}=P[M(t+1)=j|M(t)=i] \\quad \\mbox{for all $i,j\\in S$} $$</p>\n\n<p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have\n$$K(x,B)=P[M(t+1)\\in B|M(t)=x] $$\nfor all $x\\in S$ and all (measurable) subsets $B\\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p>\n\n<p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \\mod 1$ (where the $\\mod 1$ operation wraps the number back to the unit interval), where $\\{U_t\\}_{t=0}^{\\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So:\n\\begin{align} \nK(1/3, [2/3,1]) &amp;= 0\\\\\nK(1/3, [1/3, 1]) &amp;= 1\\\\\nK(1/3, [1/3, 1/2]) &amp;=1/2\n\\end{align}\nand so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. 
</p>\n\n<hr>\n\n<p>Assuming the Markov states $\\vec{M}(t)$ are vectors in $\\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function \n$$F(\\vec{y}|\\vec{x}) = P[\\vec{M}(t+1) \\leq \\vec{y} | \\vec{M}(t)=\\vec{x}]$$ \ndefined for all vectors $\\vec{x}, \\vec{y} \\in \\mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p>\n" } ]
Question: Intuition behind Transition Kernels (without use of Markov Chains) Answer 1: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;= 1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 2: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilistic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the probability of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively, given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, the requirement that <span class="math-container">$\lambda Y.\, K(x, Y)$</span> be a probability measure for each starting state <span class="math-container">$x$</span> is more or less the same as the requirement that every row of the transition matrix sum to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way, I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, please let me know!</p>
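<p>As a quick sanity check of the claim at the end of Answer 1 that $K(5/6, [0,1/2]) = 1/2$ (the check itself is not part of either answer): with $x = 5/6$ and $U_t$ uniform on $[0,1/3]$, the next state $(5/6 + U_t) \bmod 1$ lands in $[0,1/2]$ exactly when $5/6 + U_t \geq 1$, i.e. when $U_t \geq 1/6$. Since $U_t$ is uniform on an interval of length $1/3$, this happens with probability $(1/3 - 1/6)/(1/3) = 1/2$.</p>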
And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 5: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 6: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. 
And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 7: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). 
That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 8: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 9: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. 
Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 10: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 11: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. 
Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 12: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 13: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. 
</p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 14: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). Intuitively given <span class="math-container">$x\in E$</span> and <span class="math-container">$Y\in \mathcal{E}$</span>, <span class="math-container">$K(x, Y)$</span> is the probability of arriving in some state in <span class="math-container">$Y$</span> from the current state <span class="math-container">$x$</span>. Then the two conditions are natural. For example, to require that <span class="math-container">$\lambda Y. K(x, Y)$</span> is a probability measure since starting from <span class="math-container">$x$</span> is more or less the same as the requirement that in the transition matrix every row sum up to <span class="math-container">$1$</span>. 
For some very simple examples one can refer to <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Markov_kernel.html" rel="nofollow noreferrer">Markov kernel on IPFS.io</a>.</p> <p>By the way I am curious to see some simple examples where the source and target of the kernel are distinct. Anyone who has an example at hand or a reference, plz let me know!</p> Answer 15: <p>I would recommend first studying basic discrete time Markov chains without measure theory, as in my comment above. </p> <hr> <p>To give a small amount of help in your current predicament: Basic discrete time Markov chains $\{M(t)\}_{t=0}^{\infty}$ over a finite or countably infinite state space $S$ are defined by a transition probability matrix $P=(P_{ij})$ such that $$P_{ij}=P[M(t+1)=j|M(t)=i] \quad \mbox{for all $i,j\in S$} $$</p> <p>The crazy kernel stuff generalizes $P$ to uncountably infinite state spaces $S$. We have $$K(x,B)=P[M(t+1)\in B|M(t)=x] $$ for all $x\in S$ and all (measurable) subsets $B\subseteq S$. So $K$ defines a probability measure over the next-state, given the current state.</p> <p>An example might be a state space $S$ given by the unit interval $[0,1]$. Given $M(t)=x$, define the next-state by $M(t+1)= (x + U_t) \mod 1$ (where the $\mod 1$ operation wraps the number back to the unit interval), where $\{U_t\}_{t=0}^{\infty}$ are i.i.d. random variables uniformly distributed over $[0,1/3]$. So, for example, given $x=1/3$, the kernel $K(1/3,\cdot)$ formally defines the next state according to a uniform distribution over $[1/3, 2/3]$. So: \begin{align} K(1/3, [2/3,1]) &amp;= 0\\ K(1/3, [1/3, 1]) &amp;= 1\\ K(1/3, [1/3, 1/2]) &amp;=1/2 \end{align} and so on. Similarly, you can check that $K(5/6, [0,1/2]) = 1/2$. </p> <hr> <p>Assuming the Markov states $\vec{M}(t)$ are vectors in $\mathbb{R}^n$, I personally would avoid the “kernel” stuff. I would just define the next-state transitions in terms of a conditional CDF function $$F(\vec{y}|\vec{x}) = P[\vec{M}(t+1) \leq \vec{y} | \vec{M}(t)=\vec{x}]$$ defined for all vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$. And conditional PDFs can be defined as derivatives of the above (when differentiable). That is equivalent to specifying a kernel, yet it seems simpler. But that is just me. </p> Answer 16: <p>I am also studying some basic Markov process on my own, and here is some of my basic understanding. The "naive" Markov process dealing with finite states can be well modeled by the transition matrices. But when one wants to generalize to infinite states and more general probabilisitic settings using measure theory, the kernel is what one needs. And instead of specifying the probability <span class="math-container">$\mathsf{Pr}(X_{n+1} = j| X_n = i)$</span> (like the element <span class="math-container">$P_{ij}$</span> in the transition matrix), here one specifies (informally) the possibility of being in some state among <span class="math-container">$J$</span>, starting from the current state <span class="math-container">$i$</span>: <span class="math-container">$\mathsf{Pr}(X_{n+1} \in J| X_n = i)$</span>. And this leads to the definition of a kernel.</p> <p>Let's consider a kernel <span class="math-container">$K : (E,\mathcal{E}) \to (E, \mathcal{E})$</span> (note that this kernel is on a single measurable space). 
Computing the cohomology ring of the orientable genus $g$ surface by considering a quotient map.
2,099,596
10
https://math.stackexchange.com/questions/2099596/computing-the-cohomology-ring-of-the-orientable-genus-g-surface-by-considering
<p>I'm attempting to do Hatcher Chapter 3.2, Exercise 1.</p> <p>Let $M_g$ be the genus $g$ orientable surface, and let $q$ be the quotient map from $M_g$ to the wedge sum of $g$ tori obtained by collapsing a subspace of $M_g$ homeomorphic to a $g$-times punctured sphere to a point.</p> <p>We know that $H^*(T^2) = \wedge[\alpha,\beta]$, so $H^*(\vee_i T^2) = \oplus_{i = 1}^g \wedge[\alpha_i,\beta_i]$. The map $q$ induces a ring homomorphism $q^*: H^*(\vee_i T^2) \to H^*(M_g)$. Choosing appropriate representatives for the cohomology classes $\alpha_i$ and $\beta_i$ (which I do not denote any differently), we can see that $q^*(\alpha_i)$ and $q^*(\beta_i)$ are represented by $\alpha_i \circ q$ and $\beta_i \circ q$ respectively, and the ring homomorphism preserves the exterior algebra relations.</p> <p>My only problem is determining whether there are other relations in $H^*(M_g)$, because I don't believe $q^*$ is an isomorphism. Is there some way to determine these without directly computing the cup product structure of $M_g$ via a gluing diagram?</p>
[ { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H_2(X) @&gt;{(q_i)_*}&gt;&gt; H_2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span>, so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span>, from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square.
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H^1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension, and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So it's kinda like a wedge sum of graded abelian groups, whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H_0(M_g)=\\mathbb{Z}$, $H_1(M_g)=\\mathbb{Z}^{2g}$, $H_2(M_g)=\\mathbb{Z}$, so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup products are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two, it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$.
This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer-valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically, we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism, it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$ identifying the top $\\mathbb{Z}$ summand, and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" } ]
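The ring structure pinned down by these answers can also be checked numerically: with respect to the basis $\alpha_1', \beta_1', \ldots, \alpha_g', \beta_g'$ of $H^1(M_g;\mathbb{Z}) \cong \mathbb{Z}^{2g}$, the cup product pairing into $H^2(M_g) \cong \mathbb{Z}$ is the standard symplectic (intersection) form. The following short Python sketch is an editorial illustration, not part of either answer; the function name `cup_matrix` is invented here, and the $-1$ entries come from graded commutativity of the cup product in degree one.

```python
import numpy as np

def cup_matrix(g):
    # J[u, v] = coefficient of gamma' in (u cup v), in the ordered basis
    # a_1, b_1, ..., a_g, b_g of H^1(M_g; Z).
    J = np.zeros((2 * g, 2 * g), dtype=int)
    for i in range(g):
        J[2 * i, 2 * i + 1] = 1    # a_i cup b_i = gamma'
        J[2 * i + 1, 2 * i] = -1   # b_i cup a_i = -gamma' (graded commutativity)
    return J

g = 3                                     # any genus works here
J = cup_matrix(g)
assert np.array_equal(J.T, -J)            # antisymmetry of the degree-1 pairing
assert round(abs(np.linalg.det(J))) == 1  # unimodular, as Poincare duality requires
print(J)
```

All mixed products $\alpha_i' \beta_j'$ with $i \neq j$ vanish, matching the quotient description of $H^*(\bigvee_i T^2)$ given in the second answer.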
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. 
Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. 
This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). 
Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. 
Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. 
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. 
Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. 
\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically, we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots = \\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) = q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i) = 0$, and likewise $q^*(\\beta_i) \\cup q^*(\\beta_i) = 0$.</p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$ identifying the $g$ top $\\mathbb{Z}$ summands with one another, and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }
Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. 
Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. 
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. 
Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. 
This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). 
Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. 
Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. 
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. 
Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H_2(X) @&gt;{(q_i)_*}&gt;&gt; H_2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span>, so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span>, then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span>, from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. 
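In summary, the two computations above give\n<span class=\"math-container\">$$\nq^* : H^1(X) \\xrightarrow{\\cong} H^1(M_g), \\qquad q^* : H^2(X) \\cong \\mathbb{Z}^g \\to H^2(M_g) \\cong \\mathbb{Z}, \\quad (n_1, \\ldots, n_g) \\mapsto n_1 + \\cdots + n_g,\n$$</span>\nwhere the formula for the second map is forced by <span class=\"math-container\">$q^*(e_i) = 1$</span> together with additivity. 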
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Since <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology, <span class=\"math-container\">$H^1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of degree one, <span class=\"math-container\">$\\gamma'$</span> of degree two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>
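\n<p>As a quick sanity check (not needed for the argument, but reassuring): when <span class=\"math-container\">$g = 1$</span> the subspace <span class=\"math-container\">$A$</span> is a disk, so <span class=\"math-container\">$q : M_1 \\to T^2$</span> is a homotopy equivalence, and the theorem hands back generators <span class=\"math-container\">$\\alpha'$</span>, <span class=\"math-container\">$\\beta'$</span>, <span class=\"math-container\">$\\gamma'$</span> with the single relation <span class=\"math-container\">$\\alpha' \\beta' = \\gamma'$</span>, i.e. the familiar exterior algebra description of <span class=\"math-container\">$H^*(T^2)$</span>.</p>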
This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). 
Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. 
Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. 
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. 
Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. 
Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. 
This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). 
Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span> then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. 
Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology implies that <span class=\"math-container\">$H_1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of weight one, <span class=\"math-container\">$\\gamma'$</span> of weight two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" }, { "answer_id": 2290031, "score": 4, "text": "<p>The statement $H^*(\\bigvee_i T^2) = \\bigoplus_i \\bigwedge[\\alpha_i, \\beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\\mathbb{Z} \\cong \\bigwedge^0 [\\alpha_i , \\beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p>\n\n<p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. 
</p>\n\n<p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\\mathbb{Z}$, $H^1(M_g)=\\mathbb{Z}^{2g}$, $H^2(M_g)=\\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \\text{Hom}(H_k(M_g), \\mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p>\n\n<p>Because the cup product are maps $H^k(M_g) \\times H^l(M_g) \\to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \\times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \\times H^l(M_g) \\to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \\mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p>\n\n<p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \\mathbb{Z}^{2g}$ has generators $q^*(\\alpha_i)$, $q^*(\\beta_i)$, $i=1, \\dots, g$ and that $H^2(M_g) = \\mathbb{Z}$ is generated by $q^*(\\alpha_1 \\cup \\beta_1) = \\cdots =\\pm q^*(\\alpha_g \\cup \\beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\\alpha_i)\\cup q^*(\\beta_i) =q^*(\\alpha_i \\cup \\beta_i)$, $q^*(\\alpha_i) \\cup q^*(\\alpha_i) = q^*(\\alpha_i \\cup \\alpha_i)=0$, and likewise $q^*(\\beta_i)\\cup q^*(\\beta_i)= 0$. </p>\n\n<p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\\bigvee_i T^2)$, identifying the top $\\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p>\n" }, { "answer_id": 4036401, "score": 7, "text": "<p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class=\"math-container\">$H^*(M_g)$</span> comes from the cup product <span class=\"math-container\">$H^1(M_g) \\times H^1(M_g) \\to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class=\"math-container\">$X := \\bigvee_{i = 1}^g T_i$</span> with each <span class=\"math-container\">$T_i$</span> a torus, let <span class=\"math-container\">$A$</span> denote the subspace of <span class=\"math-container\">$M_g$</span> equal to a <span class=\"math-container\">$g$</span>-punctured sphere, let <span class=\"math-container\">$i : A \\to M_g$</span> denote inclusion, and let <span class=\"math-container\">$q: M_g \\to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH^1(X) \\times H^1(X) @&gt;{\\cup}&gt;&gt; H^2(X) \\\\\n@V{q^* \\times q^*} VV @V{q^*}VV \\\\\nH^1(M_g) \\times H_1(M_g) @&gt;{\\cup}&gt;&gt; H^2(M_g)\n\\end{CD}\n$$</span>\ncommutes to get at the cup product structure on <span class=\"math-container\">$M_g$</span> from (i) the (known) cup product structure on <span class=\"math-container\">$X$</span> and (ii) the action of <span class=\"math-container\">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p>\n<p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p>\n<p>We consider a piece\n<span class=\"math-container\">$$\n\\cdots \\rightarrow H_1(A) \\xrightarrow{i_*} H_1(M_g) \\xrightarrow{q_*} H_1(X) \\xrightarrow{\\delta} H_0(A) \\xrightarrow{i_*} H_0(M_g) \\to \\cdots\n$$</span>\nof the long exact sequence of homology induced by the pair <span class=\"math-container\">$(M_g, A)$</span>. Regard <span class=\"math-container\">$H_1$</span> as the abelianisation of <span class=\"math-container\">$\\pi_1$</span> to see that the map <span class=\"math-container\">$i_* : H_1(A) \\to H_1(M_g)$</span> is zero (because loops in the subspace <span class=\"math-container\">$A$</span> lie in the commutator subgroup of <span class=\"math-container\">$\\pi_1(M_g)$</span>). Note also that <span class=\"math-container\">$i_* : H_0(A) \\to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class=\"math-container\">$\\delta = 0$</span>. Thus, <span class=\"math-container\">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology.</p>\n<p>Understanding <span class=\"math-container\">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class=\"math-container\">$H_2$</span> in terms of simplicial homology. Let <span class=\"math-container\">$q_i : X \\to T_i$</span> denote the quotient map which collapses the wedge sum <span class=\"math-container\">$X$</span> to the <span class=\"math-container\">$i$</span>-th torus in the sum. Then we have a commutative diagram\n<span class=\"math-container\">$$\n\\require{AMScd}\n\\begin{CD}\nH_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\\\\n@V{\\cong} VV @V{\\cong}VV @V{\\cong}VV \\\\\n\\mathbb{Z} @&gt;&gt;&gt; \\mathbb{Z}^g @&gt;&gt;&gt; \\mathbb{Z}.\n\\end{CD}\n$$</span>\nLetting <span class=\"math-container\">$\\sigma : \\Delta^2 \\to M_g$</span> represent the generating class in <span class=\"math-container\">$H_2(M_g) \\cong \\mathbb{Z}$</span>, that <span class=\"math-container\">$\\sigma$</span> is not a boundary in <span class=\"math-container\">$H_2(M_g)$</span> implies that <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> is itself not a boundary in <span class=\"math-container\">$H_2(T_i) \\cong \\mathbb{Z}$</span>. In particular, <span class=\"math-container\">$q_i \\circ q \\circ \\sigma$</span> generates <span class=\"math-container\">$H_2(T_i)$</span> so we see that <span class=\"math-container\">$(q_i)_* \\circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\mathbb{Z}^g$</span> comes from the isomorphism <span class=\"math-container\">$H_2(X) \\cong \\bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class=\"math-container\">$T_i \\hookrightarrow X$</span> (Hatcher corollary 2.25). 
So if <span class=\"math-container\">$\\sigma_i : \\Delta^2 \\to T_i$</span> represents a generator of <span class=\"math-container\">$H_2(T_i)$</span>, then the map <span class=\"math-container\">$(q_i)_*$</span> sends <span class=\"math-container\">$(0, \\ldots, 0, \\sigma_i, 0, \\ldots, 0) \\in H_2(X)$</span> to <span class=\"math-container\">$\\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class=\"math-container\">$\\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class=\"math-container\">$\\{e_1, \\ldots, e_g\\}$</span>, then <span class=\"math-container\">$(q_i)_*$</span> acts as <span class=\"math-container\">$\\varepsilon_i$</span> for <span class=\"math-container\">$\\varepsilon_i(e_j) := \\delta_{ij}$</span>. We conclude that for each <span class=\"math-container\">$i$</span> we have <span class=\"math-container\">$\\varepsilon_i(q_*(1)) = 1$</span>, from which it follows that <span class=\"math-container\">$q_*(1) = (1, \\ldots, 1) \\in \\mathbb{Z}^g \\cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class=\"math-container\">$q^* : H^2(X) \\to H^2(M_g)$</span> maps each of the <span class=\"math-container\">$g$</span> generators of <span class=\"math-container\">$H^2(X)$</span> to the same generator of <span class=\"math-container\">$H^2(M_g)$</span>. In terms of the isomorphisms <span class=\"math-container\">$H^2(X) \\cong \\mathbb{Z}^g$</span> and <span class=\"math-container\">$H^2(M_g) \\cong \\mathbb{Z}$</span>, we have concluded that <span class=\"math-container\">$q^*(e_i) = 1$</span> for all <span class=\"math-container\">$i$</span>.</p>\n<p>With a precise understanding of the action of <span class=\"math-container\">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class=\"math-container\">$H^1(X)$</span> have generators <span class=\"math-container\">$\\{\\alpha_i, \\beta_i\\}_{i = 1}^g$</span> and <span class=\"math-container\">$H^2(X)$</span> have generators <span class=\"math-container\">$\\{\\gamma_i\\}_{i = 1}^g$</span> with <span class=\"math-container\">$\\alpha_i \\beta_i = \\gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then, since <span class=\"math-container\">$q^*$</span> induces an isomorphism on first cohomology, <span class=\"math-container\">$H^1(M_g)$</span> has generators <span class=\"math-container\">$\\alpha_i' := q^*(\\alpha_i)$</span> and <span class=\"math-container\">$\\beta_i' := q^*(\\beta_i)$</span>. And the action of <span class=\"math-container\">$q^*$</span> on <span class=\"math-container\">$H^2$</span> means that <span class=\"math-container\">$H^2(M_g)$</span> has generator <span class=\"math-container\">$\\gamma' := q^*(\\gamma_1) = \\cdots = q^*(\\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p>\n<p><strong>Theorem:</strong> The cohomology ring <span class=\"math-container\">$\\tilde{H}^*(M_g)$</span> is generated by <span class=\"math-container\">$\\{\\alpha_i', \\beta_i'\\}_{i=1}^g \\cup \\{\\gamma'\\}$</span> with <span class=\"math-container\">$\\alpha_i'$</span> and <span class=\"math-container\">$\\beta_i'$</span> of degree one, <span class=\"math-container\">$\\gamma'$</span> of degree two, and nonzero cup product relations <span class=\"math-container\">$\\alpha_i' \\beta_i' = \\gamma'$</span> for all <span class=\"math-container\">$i$</span> (all other cup products of generators are zero).</p>\n" } ]
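A small editorial addition, not part of either answer above: the homology computation both answers rely on ($H_0 = \mathbb{Z}$, $H_1 = \mathbb{Z}^{2g}$, $H_2 = \mathbb{Z}$) can be sanity-checked from the standard one-vertex CW structure on $M_g$, whose cellular boundary maps both vanish because the single 2-cell is attached along the word $[a_1,b_1]\cdots[a_g,b_g]$, in which every 1-cell appears twice with opposite signs. The helper name `betti_numbers` is my own; this is a minimal NumPy sketch, not code from either answer.

```python
import numpy as np

def betti_numbers(g):
    # Cellular chain complex 0 -> Z -> Z^{2g} -> Z -> 0 of the one-vertex
    # CW structure on M_g; both boundary maps are zero matrices because the
    # 2-cell is attached along the commutator word [a_1,b_1]...[a_g,b_g].
    d1 = np.zeros((1, 2 * g))          # boundary map C_1 -> C_0 (one vertex)
    d2 = np.zeros((2 * g, 1))          # boundary map C_2 -> C_1 (one 2-cell)
    r1 = np.linalg.matrix_rank(d1)
    r2 = np.linalg.matrix_rank(d2)
    b0 = 1 - r1                        # dim ker(C_0 -> 0) - rank d1
    b1 = (2 * g - r1) - r2             # dim ker d1 - rank d2
    b2 = 1 - r2                        # dim ker d2
    return b0, b1, b2

for g in range(1, 5):
    print(g, betti_numbers(g))         # expect (1, 2g, 1) for every g
```

With both boundary maps zero, the chain groups are already the (free) homology groups, which is exactly why the Ext terms in the universal coefficient theorem vanish above.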
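Another editorial addition, assuming the identifications in the theorem above: in the ordered basis $\alpha_1', \beta_1', \ldots, \alpha_g', \beta_g'$ the cup product pairing $H^1(M_g) \times H^1(M_g) \to H^2(M_g) \cong \mathbb{Z}$ is the standard symplectic form. The helper name `cup_form` is hypothetical; this sketch just checks the two structural facts the answers use: skew-symmetry (graded commutativity in odd degree) and unimodularity (Poincaré duality).

```python
import numpy as np

def cup_form(g):
    # Matrix of the pairing in the basis a_1', b_1', ..., a_g', b_g':
    # a_i' u b_i' = gamma', b_i' u a_i' = -gamma', all other products zero.
    block = np.array([[0, 1], [-1, 0]])
    return np.kron(np.eye(g, dtype=int), block)

g = 3
J = cup_form(g)
assert (J.T == -J).all()                     # u cup v = -(v cup u) in degree one
assert round(abs(np.linalg.det(J))) == 1     # unimodular intersection form

a1 = np.eye(2 * g, dtype=int)[0]             # coordinate vector of alpha_1'
b1 = np.eye(2 * g, dtype=int)[1]             # coordinate vector of beta_1'
print(int(a1 @ J @ b1))                      # 1, i.e. alpha_1' cup beta_1' = gamma'
```

Unimodularity here is exactly Poincaré duality for the closed orientable surface; skew-symmetry is the graded commutativity of the cup product on degree-one classes.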
Question: Computing the cohomology ring of the orientable genus $g$ surface by considering a quotient map. Answer 1: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span>, so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is to use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H^1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zeroth homology), so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is sandwiched between zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology, and in this case we'll think about <span class="math-container">$H_2$</span> in terms of singular homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. 
Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H_2(X) @&gt;{(q_i)_*}&gt;&gt; H_2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span>, so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span>, then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span>, from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then, since <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology, <span class="math-container">$H^1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of degree one, <span class="math-container">$\gamma'$</span> of degree two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 2: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension, and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So it's kind of like a wedge sum of graded abelian groups, whatever that means.) I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure.</p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H_0(M_g)=\mathbb{Z}$, $H_1(M_g)=\mathbb{Z}^{2g}$, $H_2(M_g)=\mathbb{Z}$, so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero, of course.</p> <p>Because the cup products are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two, it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g) \to H^2(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer-valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. 
Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots = \pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) = q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$.</p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$ identifying the top $\mathbb{Z}$ summands, and this has a natural ring structure (but no good notation as far as I'm aware).</p>
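One more editorial sketch, not part of Robin's answer: his conclusion says $H^*(M_g)$ is obtained from $H^*(\bigvee_i T^2)$ by identifying the $g$ top classes $\gamma_1, \ldots, \gamma_g$ to a single class $\gamma'$. A tiny dict-based model of the resulting degree-one multiplication (the helper `cup` and its basis labels are my own invention) makes the sign behaviour explicit:

```python
def cup(u, v, g):
    # Coefficient of gamma' in (u cup v), for degree-one classes written as
    # integer-coefficient dicts on the basis a1, b1, ..., ag, bg, using the
    # relations a_i cup b_i = gamma' = -(b_i cup a_i) and all else zero.
    return sum(u.get(f"a{i}", 0) * v.get(f"b{i}", 0)
               - u.get(f"b{i}", 0) * v.get(f"a{i}", 0)
               for i in range(1, g + 1))

g = 2
u = {"a1": 1, "b2": 3}
v = {"b1": 5, "a2": 2}
print(cup(u, v, g), cup(v, u, g))   # prints -1 1: u cup v = -(v cup u)
```

The opposite signs illustrate the anticommutativity that the answers use implicitly when writing $q^*(\alpha_1 \cup \beta_1) = \cdots = \pm q^*(\alpha_g \cup \beta_g)$.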
By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. 
In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 4: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. 
All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 5: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. 
Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. 
We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 6: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). 
I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 7: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. 
Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 8: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. 
Answer 8: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension, and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So it's kinda like a wedge sum of graded abelian groups, whatever that means.) I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$, so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup products are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two, it follows that the only potentially nontrivial cup product is $H^1(M_g) \times H^1(M_g) \to H^2(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer-valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots = \pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) = q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i) = 0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i) = 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summands, and this has a natural ring structure (but no good notation as far as I'm aware).</p>
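<p>For readers who want the conclusion in one place, here is one (admittedly ad hoc) way to record the ring just described: fix a generator $\mu$ of $H^2(M_g) \cong \mathbb{Z}$ and choose orientations so that the signs above all come out positive; then $$ q^*(\alpha_i) \cup q^*(\beta_j) = \delta_{ij}\, \mu, \qquad q^*(\alpha_i) \cup q^*(\alpha_j) = 0 = q^*(\beta_i) \cup q^*(\beta_j) \quad \text{for all } i, j, $$ where the $i \neq j$ products vanish because $\alpha_i \cup \beta_j = 0$ already holds in $H^*(\bigvee_i T^2)$ for classes coming from different wedge summands. Together with graded commutativity this determines every product, since all cohomology in degree three and higher vanishes.</p>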
As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. 
Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. 
By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 10: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 11: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. 
Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. 
\end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). 
Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 12: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. 
</p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 13: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. 
Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. 
We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 14: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. 
Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 15: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. 
By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. 
In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 16: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. 
All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 17: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. 
Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. Along the bottom row of the diagram, if we regard <span class="math-container">$\mathbb{Z}^g$</span> as generated by the standard basis vectors <span class="math-container">$\{e_1, \ldots, e_g\}$</span>, then <span class="math-container">$(q_i)_*$</span> acts as <span class="math-container">$\varepsilon_i$</span> for <span class="math-container">$\varepsilon_i(e_j) := \delta_{ij}$</span>. 
We conclude that for each <span class="math-container">$i$</span> we have <span class="math-container">$\varepsilon_i(q_*(1)) = 1$</span> from which it follows that <span class="math-container">$q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$</span>. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that <span class="math-container">$q^* : H^2(X) \to H^2(M_g)$</span> maps each of the <span class="math-container">$g$</span> generators of <span class="math-container">$H^2(X)$</span> to the same generator of <span class="math-container">$H^2(M_g)$</span>. In terms of the isomorphisms <span class="math-container">$H^2(X) \cong \mathbb{Z}^g$</span> and <span class="math-container">$H^2(M_g) \cong \mathbb{Z}$</span>, we have concluded that <span class="math-container">$q^*(e_i) = 1$</span> for all <span class="math-container">$i$</span>.</p> <p>With a precise understanding of the action of <span class="math-container">$q$</span> on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let <span class="math-container">$H^1(X)$</span> have generators <span class="math-container">$\{\alpha_i, \beta_i\}_{i = 1}^g$</span> and <span class="math-container">$H^2(X)$</span> have generators <span class="math-container">$\{\gamma_i\}_{i = 1}^g$</span> with <span class="math-container">$\alpha_i \beta_i = \gamma_i$</span> and all other products of generators zero (note that we use here the known cup product structure on the torus and that we adopt the convention that multiplication denotes a cup product). Then that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology implies that <span class="math-container">$H_1(M_g)$</span> has generators <span class="math-container">$\alpha_i' := q^*(\alpha_i)$</span> and <span class="math-container">$\beta_i' := q^*(\beta_i)$</span>. And the action of <span class="math-container">$q^*$</span> on <span class="math-container">$H^2$</span> means that <span class="math-container">$H^2(M_g)$</span> has generator <span class="math-container">$\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$</span>. By our very first commuting square, we conclude that</p> <p><strong>Theorem:</strong> The cohomology ring <span class="math-container">$\tilde{H}^*(M_g)$</span> is generated by <span class="math-container">$\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$</span> with <span class="math-container">$\alpha_i'$</span> and <span class="math-container">$\beta_i'$</span> of weight one, <span class="math-container">$\gamma'$</span> of weight two, and nonzero cup product relations <span class="math-container">$\alpha_i' \beta_i' = \gamma'$</span> for all <span class="math-container">$i$</span> (all other cup products of generators are zero).</p> Answer 18: <p>The statement $H^*(\bigvee_i T^2) = \bigoplus_i \bigwedge[\alpha_i, \beta_i]$ is actually wrong. The formula you are thinking of only works for reduced cohomology in a given dimension and unfortunately reduced cohomology doesn't work with cup products since the zeroth reduced group is zero. We actually have that $H^*(\bigvee_i T^2)$ is the quotient of the direct sum, identifying the various $\mathbb{Z} \cong \bigwedge^0 [\alpha_i , \beta_i]$ together. (So its kinda like a wedge sum of graded abelian groups whatever that means). 
I think this is easy enough to see by considering the map from the disjoint union to the wedge sum.</p> <p>To compute the ring structure on $M_g$ we first compute the abelian groups $H^i(M_g)$ and then use $q^*$ to determine the ring structure. </p> <p>The cohomology groups of $M_g$ are actually the same as the homology groups by universal coefficients. Indeed the homology groups of $M_g$ are free abelian groups $H^0(M_g)=\mathbb{Z}$, $H^1(M_g)=\mathbb{Z}^{2g}$, $H^2(M_g)=\mathbb{Z}$ so the Ext terms vanish and we get isomorphisms $H^k(M_g)= \text{Hom}(H_k(M_g), \mathbb{Z}) = H_k(M_g)$. All other cohomology groups in higher dimensions are zero of course.</p> <p>Because the cup product are maps $H^k(M_g) \times H^l(M_g) \to H^{k+l}(M_g)$ and the cohomology is zero above dimension two it follows that the only nontrivial cup product will be $H^1(M_g) \times H^1(M_g)$. (We also have the trivial cup product $H^0(M_g) \times H^l(M_g) \to H^l(M_g)$. This is the usual multiplication by integers since $H^0(M_g) = \mathbb{Z}$ consists of constant integer valued functions on the points of $M_g$.)</p> <p>We can use $q$ to determine the cup product here, but I'll only give a sketch. Specifically we can show that $H^1(M_g) = \mathbb{Z}^{2g}$ has generators $q^*(\alpha_i)$, $q^*(\beta_i)$, $i=1, \dots, g$ and that $H^2(M_g) = \mathbb{Z}$ is generated by $q^*(\alpha_1 \cup \beta_1) = \cdots =\pm q^*(\alpha_g \cup \beta_g)$. Then because $q^*$ is a ring homomorphism it follows that $q^*(\alpha_i)\cup q^*(\beta_i) =q^*(\alpha_i \cup \beta_i)$, $q^*(\alpha_i) \cup q^*(\alpha_i) = q^*(\alpha_i \cup \alpha_i)=0$, and likewise $q^*(\beta_i)\cup q^*(\beta_i)= 0$. </p> <p>In conclusion, $H^*(M_g)$ is the quotient of $H^*(\bigvee_i T^2)$, identifying the top $\mathbb{Z}$ summand and this has a natural ring structure (but no good notation as far as I'm aware).</p> Answer 19: <p>The comments on the previous answer express a desire for additional details, so I'll add some here for any future students looking at this question. Robin nicely explains how the only interesting multiplicative structure inside <span class="math-container">$H^*(M_g)$</span> comes from the cup product <span class="math-container">$H^1(M_g) \times H^1(M_g) \to H^2(M_g)$</span> so we focus on that. Fix the following notation: let <span class="math-container">$X := \bigvee_{i = 1}^g T_i$</span> with each <span class="math-container">$T_i$</span> a torus, let <span class="math-container">$A$</span> denote the subspace of <span class="math-container">$M_g$</span> equal to a <span class="math-container">$g$</span>-punctured sphere, let <span class="math-container">$i : A \to M_g$</span> denote inclusion, and let <span class="math-container">$q: M_g \to M_g/A = X$</span> denote the quotient map. As Robin mentions, the strategy is use the fact (Hatcher proposition 3.10) that the square <span class="math-container">$$ \require{AMScd} \begin{CD} H^1(X) \times H^1(X) @&gt;{\cup}&gt;&gt; H^2(X) \\ @V{q^* \times q^*} VV @V{q^*}VV \\ H^1(M_g) \times H_1(M_g) @&gt;{\cup}&gt;&gt; H^2(M_g) \end{CD} $$</span> commutes to get at the cup product structure on <span class="math-container">$M_g$</span> from (i) the (known) cup product structure on <span class="math-container">$X$</span> and (ii) the action of <span class="math-container">$q$</span> on cohomology. 
As (i) is addressed in the question and in Robin's answer, we focus on the details of (ii).</p> <p>Disclaimer: I am still very much a student of algebraic topology, so I welcome any comments for improving my argument!</p> <p>We consider a piece <span class="math-container">$$ \cdots \rightarrow H_1(A) \xrightarrow{i_*} H_1(M_g) \xrightarrow{q_*} H_1(X) \xrightarrow{\delta} H_0(A) \xrightarrow{i_*} H_0(M_g) \to \cdots $$</span> of the long exact sequence of homology induced by the pair <span class="math-container">$(M_g, A)$</span>. Regard <span class="math-container">$H_1$</span> as the abelianisation of <span class="math-container">$\pi_1$</span> to see that the map <span class="math-container">$i_* : H_1(A) \to H_1(M_g)$</span> is zero (because loops in the subspace <span class="math-container">$A$</span> lie in the commutator subgroup of <span class="math-container">$\pi_1(M_g)$</span>). Note also that <span class="math-container">$i_* : H_0(A) \to H_0(M_g)$</span> injects (by the definition of the zero-th homology) so that <span class="math-container">$\delta = 0$</span>. Thus, <span class="math-container">$q_*$</span> is encased by zero maps and so induces an isomorphism on first homology. By the universal coefficient theorem for cohomology (noticing that the Ext terms vanish here), it follows that <span class="math-container">$q^*$</span> induces an isomorphism on first cohomology.</p> <p>Understanding <span class="math-container">$q^*$</span> on second cohomology requires a bit more topology and in this case we'll think about <span class="math-container">$H_2$</span> in terms of simplicial homology. Let <span class="math-container">$q_i : X \to T_i$</span> denote the quotient map which collapses the wedge sum <span class="math-container">$X$</span> to the <span class="math-container">$i$</span>-th torus in the sum. Then we have a commutative diagram <span class="math-container">$$ \require{AMScd} \begin{CD} H_2(M_g) @&gt;{q_*}&gt;&gt; H^2(X) @&gt;{(q_i)_*}&gt;&gt; H^2(T_i) \\ @V{\cong} VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @&gt;&gt;&gt; \mathbb{Z}^g @&gt;&gt;&gt; \mathbb{Z}. \end{CD} $$</span> Letting <span class="math-container">$\sigma : \Delta^2 \to M_g$</span> represent the generating class in <span class="math-container">$H_2(M_g) \cong \mathbb{Z}$</span>, that <span class="math-container">$\sigma$</span> is not a boundary in <span class="math-container">$H_2(M_g)$</span> implies that <span class="math-container">$q_i \circ q \circ \sigma$</span> is itself not a boundary in <span class="math-container">$H_2(T_i) \cong \mathbb{Z}$</span>. In particular, <span class="math-container">$q_i \circ q \circ \sigma$</span> generates <span class="math-container">$H_2(T_i)$</span> so we see that <span class="math-container">$(q_i)_* \circ q_*$</span> maps 1 to 1 along the bottom row of the diagram. Now, the isomorphism <span class="math-container">$H_2(X) \cong \mathbb{Z}^g$</span> comes from the isomorphism <span class="math-container">$H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$</span> which is induced by the inclusions <span class="math-container">$T_i \hookrightarrow X$</span> (Hatcher corollary 2.25). So if <span class="math-container">$\sigma_i : \Delta^2 \to T_i$</span> represents a generator of <span class="math-container">$H_2(T_i)$</span> then the map <span class="math-container">$(q_i)_*$</span> sends <span class="math-container">$(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$</span> to <span class="math-container">$\sigma_i$</span>. 
<p>Understanding $q^*$ on second cohomology requires a bit more topology, and in this case we'll think about $H_2$ in terms of simplicial homology. Let $q_i : X \to T_i$ denote the quotient map which collapses the wedge sum $X$ onto the $i$-th torus in the sum. Then we have a commutative diagram $$ \require{AMScd} \begin{CD} H_2(M_g) @>{q_*}>> H_2(X) @>{(q_i)_*}>> H_2(T_i) \\ @V{\cong}VV @V{\cong}VV @V{\cong}VV \\ \mathbb{Z} @>>> \mathbb{Z}^g @>>> \mathbb{Z}. \end{CD} $$ Letting $\sigma$ represent the generating class in $H_2(M_g) \cong \mathbb{Z}$, the fact that $\sigma$ is not a boundary in $M_g$ implies that its image under $q_i \circ q$ is not a boundary in $T_i$ (intuitively, $q_i \circ q$ is a homeomorphism away from the collapsed subspace, hence a degree-$\pm 1$ map). In particular, the image generates $H_2(T_i)$, so $(q_i)_* \circ q_*$ maps $1$ to $\pm 1$ along the bottom row of the diagram. Now, the isomorphism $H_2(X) \cong \mathbb{Z}^g$ comes from the isomorphism $H_2(X) \cong \bigoplus_{i = 1}^g H_2(T_i)$ induced by the inclusions $T_i \hookrightarrow X$ (Hatcher, Corollary 2.25). So if $\sigma_i$ represents a generator of $H_2(T_i)$, then the map $(q_i)_*$ sends $(0, \ldots, 0, \sigma_i, 0, \ldots, 0) \in H_2(X)$ to $\sigma_i$. Along the bottom row of the diagram, if we regard $\mathbb{Z}^g$ as generated by the standard basis vectors $\{e_1, \ldots, e_g\}$, then $(q_i)_*$ acts as $\varepsilon_i$, where $\varepsilon_i(e_j) := \delta_{ij}$. We conclude that $\varepsilon_i(q_*(1)) = \pm 1$ for each $i$, and after choosing the generators $\sigma_i$ compatibly it follows that $q_*(1) = (1, \ldots, 1) \in \mathbb{Z}^g \cong H_2(X)$. Again apply the universal coefficient theorem for cohomology (noting that the Ext terms still vanish) to see that $q^* : H^2(X) \to H^2(M_g)$ maps each of the $g$ generators of $H^2(X)$ to the same generator of $H^2(M_g)$. In terms of the isomorphisms $H^2(X) \cong \mathbb{Z}^g$ and $H^2(M_g) \cong \mathbb{Z}$, we have concluded that $q^*(e_i) = 1$ for all $i$.</p> <p>With a precise understanding of the action of $q$ on cohomology, we may return to the original commuting square. We adopt the notation of the OP here: let $H^1(X)$ have generators $\{\alpha_i, \beta_i\}_{i = 1}^g$ and $H^2(X)$ have generators $\{\gamma_i\}_{i = 1}^g$, with $\alpha_i \beta_i = \gamma_i$ and all other products of generators zero (note that we use here the known cup product structure on the torus, and that we adopt the convention that multiplication denotes a cup product). The fact that $q^*$ is an isomorphism on first cohomology then implies that $H^1(M_g)$ has generators $\alpha_i' := q^*(\alpha_i)$ and $\beta_i' := q^*(\beta_i)$, and the action of $q^*$ on $H^2$ means that $H^2(M_g)$ has generator $\gamma' := q^*(\gamma_1) = \cdots = q^*(\gamma_g)$. By our very first commuting square, we conclude:</p> <p><strong>Theorem:</strong> The cohomology ring $\tilde{H}^*(M_g)$ is generated by $\{\alpha_i', \beta_i'\}_{i=1}^g \cup \{\gamma'\}$, with $\alpha_i'$ and $\beta_i'$ of degree one, $\gamma'$ of degree two, and nonzero cup product relations $\alpha_i' \beta_i' = \gamma'$ for all $i$; all other products of generators vanish, apart from those forced by graded commutativity, such as $\beta_i' \alpha_i' = -\gamma'$.</p>
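<p>For reference, the theorem can be packaged as a single presentation of the graded ring (a compact restatement of the relations above, with signs governed by graded commutativity): $$ H^*(M_g;\mathbb{Z}) \;\cong\; \Lambda_{\mathbb{Z}}[\alpha_1', \beta_1', \dots, \alpha_g', \beta_g'] \,\big/\, \big( \alpha_i'\alpha_j',\;\; \beta_i'\beta_j',\;\; \alpha_i'\beta_j' \ (i \neq j),\;\; \alpha_i'\beta_i' - \alpha_j'\beta_j' \big), $$ with every generator in degree one and $\gamma' = \alpha_i'\beta_i'$ the common top class; note that all triple products already vanish in this quotient, matching $H^k(M_g) = 0$ for $k > 2$.</p>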
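<p>As a quick sanity check (an observation, not part of the original answer): when $g = 1$ the subspace $A$ is a disk, so the quotient $q : M_1 \to M_1/A = T^2$ is a homotopy equivalence, and the theorem returns exactly the torus ring that went in: $$ H^*(M_1;\mathbb{Z}) \cong \Lambda_{\mathbb{Z}}[\alpha_1', \beta_1'], \qquad \alpha_1'\beta_1' = -\,\beta_1'\alpha_1' = \gamma', \qquad (\alpha_1')^2 = (\beta_1')^2 = 0. $$</p>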
Find $\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$
2,099,671
10
https://math.stackexchange.com/questions/2099671/find-int-0-infty-frac-cos-x-cos-x2x-mathrm-dx
<p>Recently, I came across the integral below \begin{align*} \int_{0}^{\infty }\frac{\sin x-\sin x^2}{x}\mathrm{d}x&amp;=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\int_{0}^{\infty }\frac{\sin x^{2}}{x}\mathrm{d}x\\ &amp;=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x\\ &amp;=\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x=\frac{\pi }{4} \end{align*} The same approach doesn't seem to work for $$\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$$ but why? Then how to evaluate it? Thx!</p>
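<p>For reference, the halving step above is the substitution $u = x^{2}$, under which $\mathrm{d}u = 2x\,\mathrm{d}x$ and hence $\frac{\mathrm{d}x}{x} = \frac{\mathrm{d}u}{2u}$, so that $$\int_{0}^{\infty }\frac{\sin x^{2}}{x}\mathrm{d}x=\frac{1}{2}\int_{0}^{\infty }\frac{\sin u}{u}\mathrm{d}u.$$ The substitution is equally valid for the cosine version; what fails there is the very first step of splitting the integral into two pieces, because $\int_{0}^{\infty }\frac{\cos x}{x}\mathrm{d}x$ diverges at the origin, as the answers below explain.</p>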
[ { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span 
class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 
1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
</p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span 
class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 
1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
</p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span 
class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 
1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
</p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span 
class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 
1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
</p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span 
class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. </p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a> and we can easily find that $\\mathrm{Ci}\\left ( \\alpha \\right )$ goes to $0$ when $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 
1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" }, { "answer_id": 2099677, "score": 11, "text": "<p>Because $\\displaystyle \\int_{0}^{\\infty }\\frac{\\cos x}{x}\\, \\mathrm{d}x$ does not converge, you can see <a href=\"https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx\">here</a> for a proof.</p>\n\n<p>So we have to find another way to evaluate it.</p>\n\n<p>I'll think about it and post a solution later. 
</p>\n\n<hr>\n\n<p><strong>Solution:</strong>\n\\begin{align*}\n\\int_{0}^{\\infty }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\int_{0}^{\\alpha }\\frac{\\cos x-\\cos x^2}{x}\\,\\mathrm{d}x=\\lim_{\\alpha \\rightarrow \\infty }-\\int_{0}^{\\alpha }\\frac{1-\\cos x+\\cos x^2-1}{x}\\,\\mathrm{d}x\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\int_{0}^{\\alpha }\\frac{1-\\cos x^2}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left ( -\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x+\\frac{1}{2}\\int_{0}^{\\alpha^{2} }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x \\right )\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left \\{ \\mathrm{Ci}\\left ( \\alpha \\right )-\\gamma -\\ln\\alpha +\\frac{1}{2}\\left [ \\gamma +\\ln\\alpha ^{2}-\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ] \\right \\}\\\\\n&amp;=\\lim_{\\alpha \\rightarrow \\infty }\\left [ -\\frac{\\gamma }{2}+\\mathrm{Ci}\\left ( \\alpha \\right )-\\frac{1}{2}\\mathrm{Ci}\\left ( \\alpha ^{2} \\right ) \\right ]\\\\\n&amp;=-\\frac{\\gamma }{2}\n\\end{align*}\nwhere $\\mathrm{Ci}\\left ( \\cdot \\right )$ is the <a href=\"https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral\" rel=\"nofollow noreferrer\">Cosine Integral</a>; here we used $\\int_{0}^{\\alpha }\\frac{1-\\cos x}{x}\\,\\mathrm{d}x=\\gamma +\\ln \\alpha -\\mathrm{Ci}\\left ( \\alpha \\right )$ and the fact that $\\mathrm{Ci}\\left ( \\alpha \\right )\\rightarrow 0$ as $\\alpha \\rightarrow \\infty $.</p>\n" }, { "answer_id": 2100309, "score": 8, "text": "<p>We begin by noting that we can write the integral of interest as</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{\\cos(x)-\\cos(x^2)}{x}\\,dx&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\int_0^\\infty\\frac{\\sin(x)-\\sin(x^2)}{x}\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 \\tag 1\n\\end{align}$$</span></p>\n<p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class=\"math-container\">$(1)$</span> as</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{ix}-e^{ix^2}}{x}\\,dx-i\\pi/4 =\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx \\tag 2$$</span></p>\n<p>Integrating by parts the integral on the right-hand side of <span class=\"math-container\">$(2)$</span> with <span class=\"math-container\">$u=e^{-x}-e^{-x^2}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> reveals</p>\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty \\frac{e^{-x}-e^{-x^2}}{x}\\,dx &amp;=\\int_0^\\infty \\log(x) e^{-x}\\,dx-\\int_0^\\infty 2x\\log(x)e^{-x^2}\\,dx\\\\\\\\\n&amp;=\\frac12 \\int_0^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\frac12\\gamma\n\\end{align}$$</span></p>\n<p>And we are done!</p>\n<hr />\n<blockquote>\n<p><strong>NOTE:</strong></p>\n</blockquote>\n<blockquote>\n<p>In the note at the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span>.</p>\n</blockquote>\n<hr />\n<hr />\n<p><strong>EDIT: A SECOND METHODOLOGY:</strong></p>\n<p>It is straightforward to show that for <span class=\"math-container\">$t&gt;0$</span></p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=0\\tag3$$</span></p>\n<p>Simply enforce the substitution <span class=\"math-container\">$x^t\\mapsto x$</span> to reduce the integral in <span class=\"math-container\">$(3)$</span> to</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^t)-e^{-x^t}}{x}\\,dx=\\frac1t \\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx$$</span></p>\n<p>Then, exploiting a property of the Laplace Transform, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x)-e^{-x}}{x}\\,dx=\\int_0^\\infty \\left(\\frac{x}{x^2+1}-\\frac{1}{x+1}\\right)\\,dx$$</span></p>\n<p>which is easily evaluated as <span class=\"math-container\">$0$</span>.</p>\n<p>Hence, we can write</p>\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{\\cos(x^2)-\\cos(x)}{x}\\,dx=\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x}\\,dx\\tag4$$</span></p>\n<p>The integral on the right-hand side of <span class=\"math-container\">$(4)$</span> is identical to the integral on the right-hand side of <span class=\"math-container\">$(2)$</span>, from which we obtain the previous result!</p>\n" } ]
Question: Find $\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$ Answer 1: <p>The integral $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge (see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof), so we cannot simply split the integrand and have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is the <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a>; here we used $\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x=\gamma +\ln \alpha -\mathrm{Ci}\left ( \alpha \right )$ and the fact that $\mathrm{Ci}\left ( \alpha \right )\rightarrow 0$ as $\alpha \rightarrow \infty $.</p> Answer 2: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p>
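Editorial remark (an addition of mine, not part of either answer above): identity $(3)$ in Answer 2 generalizes the whole computation. A short LaTeX note, stated under the assumption $p,q&gt;0$:

```latex
% Editorial remark, not from the original answers: identity (3) equates each
% cosine integral with its exponential counterpart, and the exponential
% difference has a Frullani-type value, giving for p, q > 0
\[
  \int_{0}^{\infty}\frac{\cos\left(x^{p}\right)-\cos\left(x^{q}\right)}{x}\,\mathrm{d}x
  =\int_{0}^{\infty}\frac{e^{-x^{p}}-e^{-x^{q}}}{x}\,\mathrm{d}x
  =\gamma\left(\frac{1}{q}-\frac{1}{p}\right).
\]
% The posted integral is the case p = 1, q = 2, which gives -\gamma/2.
```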
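A quick numerical sanity check (again an editorial addition; neither answer contains code): rather than attacking the oscillatory integral directly, one can integrate the smooth exponential form from equation $(2)$ and compare with $-\gamma/2$. A minimal Python/SciPy sketch:

```python
# Minimal sketch: numerically verify  int_0^inf (e^{-x} - e^{-x^2})/x dx = -gamma/2,
# the real-valued integral that equation (2) above reduces the problem to.
import math
import numpy as np
from scipy.integrate import quad

def integrand(x: float) -> float:
    if x == 0.0:
        return -1.0  # limiting value of (e^{-x} - e^{-x^2})/x as x -> 0
    return (math.exp(-x) - math.exp(-x * x)) / x

value, abserr = quad(integrand, 0.0, math.inf)
print(f"quad     : {value:.10f}  (abs err ~ {abserr:.1e})")
print(f"-gamma/2 : {-np.euler_gamma / 2:.10f}")
# Both values agree (approximately -0.2886078325), matching the closed form.
```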
</p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 18: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span 
class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 19: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 20: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 
2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 21: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. 
</p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 22: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span 
class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 23: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 24: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 
2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 25: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. 
</p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 26: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span 
class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 27: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 28: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 
2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 29: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. 
</p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 30: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span 
class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 31: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 32: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 
2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 33: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. 
</p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 34: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span 
class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p> Answer 35: <p>Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see <a href="https://math.stackexchange.com/questions/978558/integrate-displaystyle-int-0-infty-frac-cos-xx-dx">here</a> for a proof.</p> <p>So we have to find another way to evaluate it.</p> <p>I'll think about it and post a solution later. </p> <hr> <p><strong>Solution:</strong> \begin{align*} \int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&amp;=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\ &amp;=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\ &amp;=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\ &amp;=-\frac{\gamma }{2} \end{align*} where $\mathrm{Ci}\left ( \cdot \right )$ is <a href="https://en.wikipedia.org/wiki/Trigonometric_integral#Cosine_integral" rel="nofollow noreferrer">Cosine Integral</a> and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.</p> Answer 36: <p>We begin by noting that we can write the integral of interest as</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &amp;=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$</span></p> <p>Using Cauchy's Integral Theorem, we can write the right-hand side of <span class="math-container">$(1)$</span> as</p> <p><span class="math-container">$$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$</span></p> <p>Integrating by parts the integral on the right-hand side of <span class="math-container">$(2)$</span> with <span class="math-container">$u=e^{-x}-e^{-x^2}$</span> and <span class="math-container">$v=\log(x)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &amp;=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 
2x\log(x)e^{-x^2}\,dx\\\\ &amp;=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\frac12\gamma \end{align}$$</span></p> <p>And we are done!</p> <hr /> <blockquote> <p><strong>NOTE:</strong></p> </blockquote> <blockquote> <p>In the note at the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span>.</p> </blockquote> <hr /> <hr /> <p><strong>EDIT: A SECOND METHODOLOGY:</strong></p> <p>It is straightforward to show that for <span class="math-container">$t&gt;0$</span></p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$</span></p> <p>Simply enforce the substitution <span class="math-container">$x^t\mapsto x$</span> to reduce the integral in <span class="math-container">$(3)$</span> to</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$</span></p> <p>Then, exploiting a property of the Laplace Transform, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$</span></p> <p>which is easily evaluated as <span class="math-container">$0$</span>.</p> <p>Hence, we can write</p> <p><span class="math-container">$$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$</span></p> <p>The integral on the right-hand side of <span class="math-container">$(4)$</span> is identical to the integral on the right-hand side of <span class="math-container">$(2)$</span>, from which we obtain the previous result!</p>
Sum of given infinite series: $\frac14+\frac2{4 \cdot 7}+\frac3{4 \cdot 7 \cdot 10}+\frac4{4 \cdot 7 \cdot 10 \cdot 13 }+....$
2,100,279
10
https://math.stackexchange.com/questions/2100279/sum-of-given-infinite-series-frac14-frac24-cdot-7-frac34-cdot-7-cdot
<p>Find the sum of the infinite series</p> <p>$$\frac{1}{4}+\frac{2}{4 \cdot 7}+\frac{3}{4 \cdot 7 \cdot 10}+\frac{4}{4 \cdot 7 \cdot 10 \cdot 13 }+....$$</p> <p>Generally I do these questions by finding the sum of $n$ terms and then taking $\lim_{n \to \infty}$, but here I am not able to find the sum of $n$ terms. Could someone suggest how to proceed?</p>
[ { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N 
\\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, 
"text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - 
\\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" }, { "answer_id": 2100433, "score": 8, "text": "<p>Notice that</p>\n\n<p>$$\\frac k{\\prod_{m=1}^k(3m+1)}=\\frac1{3\\prod_{m = 1}^{k-1} (3m+1)}-\\frac{1}{3\\prod_{m = 1}^k (3m+1)}$$</p>\n\n<p>Which gives us a telescoping series:$$S_N=\\frac{1}{3} - \\frac{1}{3\\prod_{m = 1}^N (3m+1)}$$</p>\n\n<p>which tends to $1/3$ as suspected.</p>\n" }, { "answer_id": 2100307, "score": 1, "text": "<p>The partial sums, according to Maple, are\n$$-{\\frac {2\\,{3}^{1/2-N}\\pi}{27\\,\\Gamma \\left( 4/3+N \\right) \\Gamma\n \\left( 2/3 \\right) }}+\\frac{1}{3}\n$$\nIt should be possible to prove that by induction.</p>\n" } ]
Question: Sum of given infinite series: $\frac14+\frac2{4 \cdot 7}+\frac3{4 \cdot 7 \cdot 10}+\frac4{4 \cdot 7 \cdot 10 \cdot 13 }+....$ Answer 1: <p>Notice that</p> <p>$$\frac k{\prod_{m=1}^k(3m+1)}=\frac1{3\prod_{m = 1}^{k-1} (3m+1)}-\frac{1}{3\prod_{m = 1}^k (3m+1)}$$</p> <p>Which gives us a telescoping series:$$S_N=\frac{1}{3} - \frac{1}{3\prod_{m = 1}^N (3m+1)}$$</p> <p>which tends to $1/3$ as suspected.</p> Answer 2: <p>The partial sums, according to Maple, are $$-{\frac {2\,{3}^{1/2-N}\pi}{27\,\Gamma \left( 4/3+N \right) \Gamma \left( 2/3 \right) }}+\frac{1}{3} $$ It should be possible to prove that by induction.</p>
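The telescoping closed form above is easy to confirm with exact rational arithmetic. The following sketch is an illustrative addition (standard-library Python only, not part of the original answers); it compares direct partial sums of $\sum_{k\ge 1} k/\prod_{m=1}^{k}(3m+1)$ with $S_N=\frac13-\frac{1}{3\prod_{m=1}^{N}(3m+1)}$:

<pre><code># Illustrative check of the telescoping identity with exact fractions.
from fractions import Fraction

prod, S = 1, Fraction(0)
for k in range(1, 12):
    prod *= 3 * k + 1                        # running product 4 * 7 * ... * (3k+1)
    S += Fraction(k, prod)                   # direct partial sum
    closed = Fraction(1, 3) - Fraction(1, 3 * prod)
    assert S == closed                       # telescoping closed form S_N
print(float(S))                              # approx 0.333333..., consistent with the limit 1/3
</code></pre>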
Real Analysis Methodologies to show $\gamma =2\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx$
2,100,567
10
https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos
<p>In <a href="https://math.stackexchange.com/questions/2099671/find-int-0-infty-frac-cos-x-cos-x2x-mathrm-dx/2100309#2100309">THIS ANSWER</a>, I used straightforward complex analysis to show that </p> <blockquote> <p>$$\gamma =2\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx \tag 1$$</p> </blockquote> <p>where $\gamma =-\int_0^\infty \log(x) e^{-x}\,dx$ is the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="nofollow noreferrer">Euler-Mascheroni Constant</a>.</p> <p>The key in the derivation of $(1)$ was to transform the cosine terms into real exponential ones. </p> <p>To date, I have been unable to use strictly real analysis, without appealing to tabulated results of special functions (e.g., use of the $\text{Cin}(x)$ and $\text{Ci}(x)$ functions), to prove $(1)$. </p> <p>I have tried introducing a parameter and using "Feynman's Trick to augment the integral into something manageable. Or somewhat equivalently, rewriting the integral in $(1)$ as a double integral and proceeding by exploiting Fubini-Tonelli. </p> <blockquote> <p><strong>QUESTION: What are ways to prove $(1)$ without relying on complex analysis and without simply appealing to tabulated relationships of special functions. For example, stating that the $Ci(x)$ function is defined as $\text{Ci}(x)\equiv -\int_x^\infty\frac{\cos(t)}{t}\,dt=\gamma +\log(x) +\int_0^x \frac{\cos(t)-1}{t}\,dt$ is unsatisfactory unless one proves the latter equality.</strong></p> </blockquote>
[ { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = 
-f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this point, this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging in this asymptotic, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist):</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$.</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$.</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
" }, { "answer_id": 2101138, "score": 18, "text": "<p>Starting from\n$$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$\nand using\n$$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$\nwe have\n$$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$\nHence, letting $x=1$, we get\n$$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$\nLetting $s=t^k,~k&gt;0$, we get\n$$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$\nSo, taking $k=a$ and $k=b$,\n$$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$\nhence\n$$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-t^a}-e^{-t^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$\nthen\n$$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$\nLetting $t\rightarrow \dfrac{1}{t}$ in the first piece, we get\n$$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$\nSo\n$$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$\nand\n$$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$\nso\n$$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$\nhence\n$$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$\nIt's easy to see that $f\left ( \infty \right )=0$, so\n$$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$\nSubstituting $t\mapsto t^{a}$, we get\n$$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p>
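\n\n<hr>\n\n<p>A minimal numerical check of $(1)$, assuming Python with the <code>mpmath</code> library: unlike the oscillatory cosine version, the exponential integrand is absolutely integrable, so ordinary quadrature applies directly.</p>\n\n<pre><code># Sketch: check (1/b - 1/a)*gamma = int_0^inf (e^{-t^a} - e^{-t^b})/t dt
# for a few sample exponents (assumes the mpmath library).  Near t = 0 the
# integrand behaves like t^(b-1) - t^(a-1), hence is integrable for a, b > 0.
from mpmath import mp, exp, euler, quad, inf, mpf

mp.dps = 25

def lhs(a, b):
    # split at 1 so the quadrature handles the endpoint behaviour well
    return quad(lambda t: (exp(-t**a) - exp(-t**b)) / t, [0, 1, inf])

for a, b in [(1, 2), (2, 3), (mpf('0.5'), 4)]:
    print((a, b), lhs(a, b), (mpf(1)/b - mpf(1)/a) * euler)
</code></pre>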
" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\begin{align}\nI(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\\n&amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\\n&amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\\n&amp;=-\left(\frac1a -\frac1b\right)\,\gamma\n\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\gamma =-\int_0^\infty e^{-x}\,\log(x)\,dx$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\gamma$</span> as given by <span class=\"math-container\">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class=\"math-container\">$\gamma$</span> as expressed by the limit <span class=\"math-container\">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\begin{align}\n\int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\\n&amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\\n&amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+n\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\\n&amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\\n&amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\\n\end{align}$$</span></p>\n\n<p>as was to be shown!</p>
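\n\n<hr>\n\n<p>The integral relationship above is also easy to confirm numerically; here is a minimal sketch assuming Python with the <code>mpmath</code> library, whose built-in constant <code>euler</code> is $\gamma$.</p>\n\n<pre><code># Sketch: confirm gamma = -int_0^inf e^{-x} log(x) dx (assumes mpmath).
# The logarithmic singularity at x = 0 is integrable; splitting the range
# at 1 keeps the quadrature accurate.
from mpmath import mp, exp, log, quad, inf, euler

mp.dps = 30
val = -quad(lambda x: exp(-x) * log(x), [0, 1, inf])
print(val)               # the numerically computed value
print(euler)             # mpmath's Euler-Mascheroni constant
print(abs(val - euler))  # difference should be on the order of 1e-29
</code></pre>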
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging this asymptotics, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever they exist)</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$.</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$,</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$,</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals. 
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" }, { "answer_id": 2143538, "score": 4, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$</p>\n\n<blockquote>\n <p><strong>A '$\\ds{\\color{#f00}{complex\\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x =\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x^{2}} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n2\\lim_{\\Lambda \\to \\infty}\\bracks{%\n\\int_{0}^{\\Lambda}{1 - \\cos\\pars{x} \\over x}\\,\\dd x -\n{1 \\over 2}\\int_{0}^{\\Lambda^{2}}{1 - \\cos\\pars{x} \\over x}\\,\\dd x}\n\\label{1}\\tag{1}\n\\end{align}</p>\n\n<hr>\n\n<blockquote>\n <p>With $\\ds{R &gt; 0}$:</p>\n</blockquote>\n\n<p>\\begin{align}\n&amp;\\int_{0}^{R}{1 - \\cos\\pars{x} \\over x}\\,\\dd x =\n\\Re\\int_{0}^{R}{1 - \\expo{\\ic x} \\over x}\\,\\dd x\n\\\\[5mm] = &amp;\n-\\Re\\int_{0}^{\\pi/2}\\bracks{1 - \\exp\\pars{\\ic R\\expo{\\ic\\theta}}}\\ic\\,\\dd\\theta -\n\\Re\\int_{R}^{0}{1 - \\expo{-y} \\over y}\\,\\dd y\n\\\\[5mm] = &amp;\\\n-\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta +\n\\ln\\pars{R}\\pars{1 - \\expo{-R}} - \\int_{0}^{R}\\ln\\pars{y}\\expo{-y}\\,\\dd y\n\\\\[5mm] \\stackrel{\\mrm{as}\\ R\\ \\to\\ \\infty}{\\sim}\\,\\,\\, &amp;\\\n\\ln\\pars{R} - \\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\bbx{\\ds{\\ln\\pars{R} + \\gamma}}\n\\label{2}\\tag{2}\n\\end{align}</p>\n\n<blockquote>\n <p>because</p>\n</blockquote>\n\n<p>$$\n\\left\\{\\begin{array}{l}\n\\ds{0 &lt; \\verts{\\int_{0}^{\\pi/2}\\sin\\pars{R\\cos\\pars{\\theta}}\n\\expo{-R\\sin\\pars{\\theta}}\\,\\dd\\theta} &lt; \\int_{0}^{\\pi/2}\n\\exp\\pars{-\\,{2R \\over \\pi}\\,\\theta}\\,\\dd\\theta =\n{\\pi \\over 2}\\,{1 - \\expo{-R} \\over R}}\n\\\\[5mm]\n\\mbox{and}\\\n\\ds{\\int_{0}^{\\infty}\\ln\\pars{y}\\expo{-y}\\,\\dd y =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}y^{\\mu}\\expo{-y}\\,\\dd y\\,\n\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\\right\\vert_{\\ \\mu\\ =\\ 0} = \\Psi\\pars{1} = -\\gamma}\n\\end{array}\\right.\n$$\n<hr>\nWith \\eqref{1} and 
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, let $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nlet $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo,let $k=a,b$\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-x^a}-e^{-x^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nlet $t\\rightarrow \\dfrac{1}{t}$,we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$,so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nLet $x^a\\to t$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might 
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\\int_0^\\infty \\frac{e^{-x^a}-\\cos(x^a)}{x}\\,dx=0 \\tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{\\cos(x^a)-\\cos(x^b)}{x}\\,dx \\tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\\int_0^\\infty \\frac{e^{-x^a}-e^{-x^b}}{x}\\,dx \\tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\nI(a,b)&amp;=\\int_0^\\infty \\left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\\right)\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\int_0^\\infty ax^{a-1}e^{-x^a}\\,\\log(x)\\,dx-\\int_0^\\infty bx^{b-1}e^{-x^b}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\frac1a \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx-\\frac1b \\int_0^\\infty e^{-x}\\,\\log(x)\\,dx\\\\\\\\\n&amp;=-\\left(\\frac1a -\\frac1b\\right)\\,\\gamma\n\\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\\gamma =-\\int_0^\\infty e^{-x}\\,\\log(x)$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\\gamma$</span> as given by <span class=\"math-container\">$\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$</span> is equal to <span class=\"math-container\">$\\gamma$</span> as expressed by the limit <span class=\"math-container\">$\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\\begin{align}\n\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx&amp;=\\lim_{n\\to \\infty}\\int_0^n \\left(1-\\frac xn\\right)^n\\,\\log(x)\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\int_0^1 x^n \\log(n(1-x))\\,dx\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} n \\left(\\log(n) \\int_0^1 x^n\\,dx+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)+\\int_0^1 x^n\\,\\log(1-x)\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n\\sum_{k=1}^\\infty \\frac1k \\int_0^1 x^{n+k}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\frac{n}{n+1}\\log(n)-n \\sum_{k=1}^\\infty \\frac1{k(k+n+1)}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^\\infty \\left(\\frac1k-\\frac1{k+n+1}\\right)\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\frac{n}{n+1}\\left(\\log(n)- \\sum_{k=1}^{n+1} \\frac1k\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty} \\left(\\log(n)- \\sum_{k=1}^{n} \\frac1k\\right)\\\\\\\\\n\\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we 
{ "answer_id": 2141879, "score": 8, "text": "<p>It turns out that we have the following observation:</p>\n\n<blockquote>\n <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\\infty) \\to \\Bbb{C}$, we have</p>\n \n <p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + c(f) + o(1) \\qquad \\text{as } \\epsilon \\to 0^+ \\tag{1} $$</p>\n \n <p>where the constant $c(f)$ is computed by</p>\n \n <p>$$ c(f) = \\lim_{R\\to\\infty}\\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0)\\log R\\right) - f(0)\\gamma. \\tag{2} $$</p>\n</blockquote>\n\n<p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\\mathbf{1}_{(0,1)}(x))/x$ and notice that</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -f(0)\\log\\epsilon + \\int_{\\epsilon}^{\\infty} g(x) \\, dx. $$</p>\n\n<p>Assuming that the LHS of $\\text{(1)}$ exists for all $\\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\\text{(1)}$. Next, notice that $c(f) = \\mathcal{L}g(0)$ and that $-(\\mathcal{L}g(s))' = \\mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p>\n\n<p>\\begin{align*}\nc(f)\n&amp;= \\lim_{R\\to\\infty} \\int_{0}^{R} (-\\mathcal{L}g(s))' \\, ds \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) (1 - e^{-R})\\log R + f(0) \\int_{0}^{R} e^{-s}\\log s \\, ds \\right) \\\\\n&amp;= \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\mathcal{L}f(s) \\, ds - f(0) \\log R \\right) - f(0)\\gamma.\n\\end{align*}</p>\n\n<hr>\n\n<p>At this point this is just a heuristic computation. For a broad class of functions for which the LHS of $\\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \\cos x$. Now plugging in $\\mathcal{L}f(s) = \\frac{s}{s^2+1}$ shows that $c(f) = -\\gamma$ and thus</p>\n\n<p>$$ \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1). $$</p>\n\n<p>Plugging in this asymptotic expansion, we have</p>\n\n<p>$$\n\\int_{\\epsilon}^{\\infty} \\frac{\\cos(x^2) - \\cos x}{x} \\, dx\n= \\frac{1}{2}\\int_{\\epsilon^2}^{\\infty} \\frac{\\cos x}{x} \\, dx - \\int_{\\epsilon}^{\\infty} \\frac{\\cos x}{x} \\, dx\n= \\frac{1}{2}\\gamma + o(1)\n$$</p>\n\n<p>and the identity follows by letting $\\epsilon \\to 0^+$.</p>\n\n<hr>\n\n<p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\\int_{0}^{\\infty} \\frac{f(x)}{x} \\, dx$. This has the following nice properties (whenever the quantities involved exist):</p>\n\n<ul>\n<li>$c$ is linear: $c(\\alpha f(x) + \\beta g(x)) = \\alpha c(f) + \\beta c(g)$;</li>\n<li>$c(f(x^p)) = \\frac{1}{p}c(f)$ for $p &gt; 0$;</li>\n<li>$c(f(px)) = c(f) - f(0)\\log p$ for $p &gt; 0$.</li>\n</ul>\n\n<p>Together with some known values, we can easily compute other types of integrals.
For instance, using the fact that $c(\\cos x) = -\\gamma$ and $c(e^{-x}) = -\\gamma$, we have</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\frac{\\cos (x^p) - \\exp(-x^q)}{x} \\, dx\n&amp;= c\\left\\{ \\cos (x^p) - \\exp(-x^q) \\right\\} \\\\\n&amp;= \\frac{1}{p}c(\\cos x) - \\frac{1}{q}c(e^{-x})\n = \\gamma\\left( \\frac{1}{q} - \\frac{1}{p}\\right)\n\\end{align*}</p>\n\n<p>for $p, q &gt; 0$.</p>\n" },
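The expansion $\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx = -\log\epsilon - \gamma + o(1)$ that powers this answer, i.e. $c(\cos x)=-\gamma$, can be checked against the cosine integral, since $\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx = -\mathrm{Ci}(\epsilon)$ (a minimal sketch, assuming SciPy):

```python
import numpy as np
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

# Sanity-check sketch (assumes numpy + scipy):
# c(cos) = lim_{eps->0} [ -Ci(eps) + log(eps) ], which should tend to -gamma.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, -sici(eps)[1] + np.log(eps), -np.euler_gamma)
```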
\\eqref{2}:\n\\begin{align}\n2\\int_{0}^{\\infty}{\\cos\\pars{x^{2}} - \\cos\\pars{x} \\over x}\\,\\dd x &amp; =\n2\\braces{\\bracks{\\ln\\pars{\\Lambda} + \\gamma} -\n{1 \\over 2}\\bracks{\\ln\\pars{\\Lambda^{2}} + \\gamma}} = \\bbx{\\ds{\\gamma}}\n\\end{align}</p>\n" }, { "answer_id": 2101138, "score": 18, "text": "<p>For\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\ln t\\, \\mathrm{d}t$$\nusing\n$$\\ln t=\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s$$\nwe have\n$$\\Gamma '\\left ( x \\right )=\\int_{0}^{\\infty }e^{-t}t^{x-1}\\int_{0}^{\\infty }\\frac{e^{-s}-e^{-ts}}{s}\\, \\mathrm{d}s\\mathrm{d}t=\\Gamma \\left ( x \\right )\\int_{0}^{\\infty }\\left ( e^{-s}-\\frac{1}{\\left ( s+1 \\right )^{x}} \\right )\\frac{\\mathrm{d}s}{s}$$\nHence, letting $x=1$ we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{s+1 }-e^{-s} \\right )\\frac{\\mathrm{d}s}{s}$$\nLetting $s=t^k,~k&gt;0$, we get\n$$\\gamma =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{k}+1 }-e^{-t^{k}} \\right )\\frac{k\\, \\mathrm{d}t}{t}$$\nSo, letting $k=a$ and $k=b$,\n$$\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{a}+1 }-e^{-t^{a}} \\right )\\frac{ \\mathrm{d}t}{t}~~,~~\\frac{\\gamma}{b} =\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }-e^{-t^{b}} \\right )\\frac{ \\mathrm{d}t}{t}$$\nhence\n$$\\frac{\\gamma}{b}-\\frac{\\gamma}{a} =\\int_{0}^{\\infty }\\left [\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )+\\left ( e^{-t^a}-e^{-t^b} \\right ) \\right ]\\frac{ \\mathrm{d}t}{t}$$\nthen\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}+\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nLetting $t\\rightarrow \\dfrac{1}{t}$ in the first integral, we get\n$$\\int_{0}^{1}\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=-\\int_{1}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}$$\nSo\n$$\\int_{0}^{\\infty }\\left ( \\frac{1}{t^{b}+1 }- \\frac{1}{t^{a}+1 } \\right )\\frac{\\mathrm{d}t}{t}=0$$\nand\n$$\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma =\\int_{0}^{\\infty }\\frac{e^{-t^a}-e^{-t^b}}{t}\\, \\mathrm{d}t\\tag1$$\n<strong>Lemma:</strong></p>\n\n<blockquote>\n <p>$$\\int_{0}^{\\infty }\\frac{e^{-t^a}-\\cos t^a}{t}\\, \\mathrm{d}t=0~,~a&gt;0$$</p>\n</blockquote>\n\n<p><strong>Proof:</strong>\nLet $$f\\left ( x \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, e^{-xt}\\, \\mathrm{d}t$$\nso\n$$f'\\left ( x \\right )=\\int_{0}^{\\infty }\\left ( \\cos t-e^{-t} \\right )e^{-xt}\\, \\mathrm{d}t=\\frac{x}{1+x^2}-\\frac{1}{1+x}$$\nhence\n$$\\int_{0}^{\\infty }f'\\left ( x \\right ) \\mathrm{d}x=\\ln\\frac{\\sqrt{1+x^2}}{1+x}\\Bigg|_{0}^{\\infty }=0=f\\left ( \\infty \\right )-f\\left ( 0 \\right )$$\nIt's easy to see that $f\\left ( \\infty \\right )=0$, so\n$$f\\left ( 0 \\right )=\\int_{0}^{\\infty }\\frac{e^{-t}-\\cos t}{t}\\, \\mathrm{d}t=0$$\nSubstituting $t\\to t^{a}$, we get\n$$a\\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\Rightarrow \\int_{0}^{\\infty }\\frac{e^{-t^{a}}-\\cos t^{a}}{t}\\, \\mathrm{d}t=0\\tag2$$\nNow using $(1)$ and $(2)$, we get\n$$\\Large\\boxed{\\color{Blue} {\\int_{0}^{\\infty }\\frac{\\cos x^{a}-\\cos x^b}{x}\\, \\mathrm{d}x=\\left ( \\frac{1}{b}-\\frac{1}{a} \\right )\\gamma }}$$</p>\n" }, { "answer_id": 2101763, "score": 8, "text": "<blockquote>\n <p><strong>I thought it might
be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p>\n</blockquote>\n\n<p>The Lemma proved in the aforementioned post is expressed as </p>\n\n<p><span class=\"math-container\">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span>.</p>\n\n<hr>\n\n<p>We now examine a generalized version of the integral of interest and write</p>\n\n<p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p>\n\n<p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p>\n\n<hr>\n\n<p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p>\n\n<p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p>\n\n<p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\log(x)$</span> to obtain</p>\n\n<p><span class=\"math-container\">$$\begin{align}\nI(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\\n&amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\\n&amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\\n&amp;=-\left(\frac1a -\frac1b\right)\,\gamma\n\end{align}$$</span></p>\n\n<p>where we used the integral relationship <span class=\"math-container\">$\gamma =-\int_0^\infty e^{-x}\,\log(x)\,dx$</span>.</p>\n\n<hr>\n\n<p><strong>NOTE:</strong></p>\n\n<p>We can show that <span class=\"math-container\">$\gamma$</span> as given by <span class=\"math-container\">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class=\"math-container\">$\gamma$</span> as expressed by the limit <span class=\"math-container\">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p>\n\n<p><span class=\"math-container\">$$\begin{align}\n\int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\\n&amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\\n&amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\\n&amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\\n&amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\\n&amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\\n\end{align}$$</span></p>\n\n<p>as was to be shown!</p>\n" } ]
Question: Real Analysis Methodologies to show $\gamma =2\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx$ Answer 1: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, letting $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ Letting $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So, letting $k=a$ and $k=b$, $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-t^a}-e^{-t^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ Letting $t\rightarrow \dfrac{1}{t}$ in the first integral, we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$, so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Substituting $t\to t^{a}$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p>
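<p>As a sanity check on the boxed identity, one can test the equivalent exponential form $(1)$ numerically before trusting the oscillatory version. The sketch below is only illustrative: it assumes Python with the mpmath library, and the helper name <code>lhs</code> is ours, introduced purely for this check.</p>
<pre><code># Numerical sanity check of the exponential form (1):
#   int_0^oo (e^{-t^a} - e^{-t^b}) / t dt = (1/b - 1/a) * gamma.
from mpmath import mp, mpf, quad, exp, euler

mp.dps = 30  # working precision in decimal digits

def lhs(a, b):
    f = lambda t: (exp(-t**a) - exp(-t**b)) / t
    # split at t = 1; tanh-sinh quadrature then handles the integrable
    # behaviour at t = 0 and the fast-decaying tail separately
    return quad(f, [0, 1, mp.inf])

for a, b in [(1, 2), (2, 3), (mpf('0.5'), 4)]:
    print(lhs(a, b), (1/mpf(b) - 1/mpf(a)) * euler)
</code></pre>
<p>If the identity holds, the two printed columns should agree for each pair $(a,b)$, which is a quick guard against sign slips in the derivation of $(1)$.</p>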
Answer 2: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class=\"math-container\">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class=\"math-container\">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class=\"math-container\">$a&gt;0$</span> and <span class=\"math-container\">$b&gt;0$</span>.</p> <hr> <p>Using <span class=\"math-container\">$(1)$</span> reveals that <span class=\"math-container\">$(2)$</span> can be written as </p> <p><span class=\"math-container\">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class=\"math-container\">$(3)$</span> with <span class=\"math-container\">$u=e^{-x^a}-e^{-x^b}$</span> and <span class=\"math-container\">$v=\log(x)$</span> to obtain</p> <p><span class=\"math-container\">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class=\"math-container\">$\gamma =-\int_0^\infty e^{-x}\,\log(x)\,dx$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class=\"math-container\">$\gamma$</span> as given by <span class=\"math-container\">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class=\"math-container\">$\gamma$</span> as expressed by the limit <span class=\"math-container\">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class=\"math-container\">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p>
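<p>The NOTE's equality of the two representations of $\gamma$ lends itself to a quick numerical cross-check. A minimal sketch, assuming Python with mpmath (whose built-in constant <code>euler</code> is $\gamma$):</p>
<pre><code># Cross-check of the NOTE:
# -int_0^oo e^{-x} log(x) dx should match gamma = lim (H_n - log n).
from mpmath import mp, quad, exp, log, euler, harmonic

mp.dps = 25

integral_form = -quad(lambda x: exp(-x) * log(x), [0, 1, mp.inf])
n = 10**6
limit_form = harmonic(n) - log(n)   # converges like 1/(2n)

print(integral_form)  # should reproduce gamma to working precision
print(limit_form)     # should agree to roughly 6 decimal places
print(euler)          # mpmath's Euler-Mascheroni constant, for reference
</code></pre>
<p>The integral form should reproduce $\gamma$ to essentially full working precision, while the finite-$n$ harmonic form differs by roughly $1/(2n)$, matching the limit statement.</p>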
Answer 3: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. \tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this point this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging in $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging in this asymptotic expansion, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever the values involved exist):</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$.</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p>
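<p>The key asymptotic $\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx = -\log\epsilon - \gamma + o(1)$, and hence $c(\cos x) = -\gamma$, can be checked through the cosine integral, since $\mathrm{Ci}(\epsilon) = -\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx$. A short sketch, assuming Python with mpmath:</p>
<pre><code># Check of the asymptotic used above:
#   int_eps^oo cos(x)/x dx = -log(eps) - gamma + o(1)  as eps -> 0+,
# via the cosine integral Ci(eps) = -int_eps^oo cos(x)/x dx.
from mpmath import mp, mpf, ci, log, euler

mp.dps = 25
for eps in [mpf('1e-2'), mpf('1e-4'), mpf('1e-6')]:
    tail = -ci(eps)                          # int_eps^oo cos(x)/x dx
    print(eps, tail - (-log(eps) - euler))   # -> 0, roughly like eps^2/4
</code></pre>
<p>The printed differences should shrink like $\epsilon^2/4$, consistent with the claimed $o(1)$ term and with $c(\cos x) = -\gamma$.</p>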
Answer 4: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right.
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p>
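<p>Result $(2)$ above, namely $\int_0^R \frac{1-\cos x}{x}\,\mathrm{d}x \sim \ln(R) + \gamma$, can also be probed numerically. A sketch assuming Python with mpmath; the cutoff $R = N\pi$ and the splitting of the range at multiples of $\pi$ are arbitrary choices made only to keep the quadrature stable on the oscillatory integrand:</p>
<pre><code># Check of result (2):
#   int_0^R (1 - cos x)/x dx - ln R -> gamma  as R -> oo.
from mpmath import mp, pi, quad, cos, log, euler

mp.dps = 20
N = 100                                   # integrate up to R = N*pi
pts = [k * pi for k in range(N + 1)]      # split once per half-period
val = quad(lambda x: (1 - cos(x)) / x, pts)
R = N * pi
print(val - log(R))   # close to gamma, up to a small oscillatory tail error
print(euler)
</code></pre>
<p>The first printed value should be close to $\gamma$, with the discrepancy coming from the truncated oscillatory tail beyond $R$.</p>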
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 7: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
Answer 7: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. \tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>So far this is only a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging in $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging in this asymptotic expansion, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. It has the following nice properties (whenever they exist):</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p>
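<p>The value $c(\cos x) = -\gamma$ can be cross-checked with <code>mpmath</code>'s cosine integral, using $\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx = -\operatorname{Ci}(\epsilon)$; a minimal sketch:</p>
<pre><code>from mpmath import mp, ci, log, euler, mpf

mp.dps = 25

# by (1), c(cos) = limit as eps -> 0 of ( int_eps^inf cos(x)/x dx + log(eps) )
for eps in [mpf('1e-2'), mpf('1e-4'), mpf('1e-6')]:
    print(-ci(eps) + log(eps))  # tends to -euler as eps shrinks

print(-euler)
</code></pre>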
Answer 8: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. $$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\lim_{\Lambda \to \infty}\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p>
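<p>The asymptotic behaviour $\int_{0}^{R}\frac{1-\cos(x)}{x}\,dx \sim \ln(R) + \gamma$ derived above is easy to probe numerically as well; a minimal sketch assuming <code>mpmath</code>, with $R=50\pi$ as an arbitrary cutoff:</p>
<pre><code>from mpmath import mp, quad, cos, log, euler, pi

mp.dps = 25
R = 50 * pi  # hypothetical cutoff; the agreement improves as R grows
pts = [k * pi for k in range(51)]  # break at half-periods so quad tracks the oscillation

val = quad(lambda x: (1 - cos(x)) / x, pts)
print(val - log(R))  # close to euler, with a gap that shrinks roughly like 1/R
print(euler)
</code></pre>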
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 9: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 10: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 11: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 12: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 13: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 14: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 15: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 16: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 17: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 18: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 19: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 20: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 21: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 22: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 23: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 24: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 25: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 26: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 27: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
Answer 27: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. \tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this point this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging in $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging in this asymptotic expansion, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. It has the following nice properties (whenever they exist):</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$,</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$.</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p>
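<p>The value $c(\cos x) = -\gamma$ can also be read off numerically — a minimal sketch under the assumption that <code>mpmath</code> is available; since $\int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\operatorname{Ci}(\epsilon)$, adding back $\log\epsilon$ isolates the constant in $\text{(1)}$:</p>

<pre><code># Illustration only: estimate c(cos) from the truncated integral in (1).
# int_eps^oo cos(x)/x dx = -Ci(eps) = -log(eps) + c + o(1), so
# c is approximated by -Ci(eps) + log(eps).
from mpmath import mp, mpf, ci, log, euler

mp.dps = 25
for eps in [mpf('1e-2'), mpf('1e-4'), mpf('1e-8')]:
    c_est = -ci(eps) + log(eps)
    print(eps, c_est + euler)     # -> 0, i.e. c(cos) = -gamma
</code></pre>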
Answer 28: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. So, that is the reason I put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right.
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 29: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 30: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 31: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 32: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 33: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 34: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 35: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 36: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 37: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 38: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 39: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 40: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 41: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 42: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 43: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 44: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 45: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 46: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 47: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
Answer 47: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. \tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this point this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging in $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging in this asymptotic expansion, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. It has the following nice properties (whenever the quantities involved exist):</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$,</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$.</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p>
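<p>The key value $c(\cos x) = -\gamma$, i.e. $\int_{\epsilon}^{\infty} \frac{\cos x}{x}\,dx + \log\epsilon \to -\gamma$, can be checked numerically through the cosine integral $\operatorname{Ci}(\epsilon) = -\int_{\epsilon}^{\infty}\frac{\cos x}{x}\,dx$. A minimal sketch, assuming <code>mpmath</code>:</p> <pre><code># Check that int_eps^inf cos(x)/x dx + log(eps) -> -gamma as eps -> 0+,
# via the cosine integral Ci(eps) = -int_eps^inf cos(x)/x dx (mpmath assumed).
from mpmath import mp, mpf, ci, log, euler

mp.dps = 30
for k in range(1, 6):
    eps = mpf(10)**(-k)
    print(k, -ci(eps) + log(eps))  # tends to -gamma
print(-euler)                      # -0.5772156649...
</code></pre>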
Answer 48: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users, which is the reason I put forward the following one:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$, and noting that $\ds{\pars{1 - \expo{\ic z}}/z}$ has a removable singularity at $\ds{z = 0}$, so that its integral over the boundary of a quarter disc of radius $\ds{R}$ vanishes:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}\quad \pars{\mbox{since}\ \sin\pars{\theta} \geq {2\theta \over \pi}\ \mbox{on}\ \bracks{0,\pi/2}}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right.
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\lim_{\Lambda \to \infty}\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p>
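<p>Finally, the asymptotic \eqref{2} itself can be illustrated numerically: $\int_{0}^{R}\pars{1 - \cos x}/x\,\dd x - \ln\pars{R}$ should approach $\gamma$, the remainder being $-\,\mrm{Ci}\pars{R} = \mrm{O}\pars{1/R}$. A minimal sketch, assuming <code>mpmath</code>; the subdivision count is an illustrative choice:</p> <pre><code># Illustration of (2): int_0^R (1 - cos x)/x dx - ln(R) -> gamma.
# mpmath assumed; the node list subdivides [0, R] so that the
# quadrature resolves the oscillations of cos(x).
from mpmath import mp, quad, cos, log, euler, linspace

mp.dps = 25
for R in [10, 100, 1000]:
    val = quad(lambda x: (1 - cos(x)) / x, linspace(0, R, 128))
    print(R, val - log(R))  # approaches gamma = 0.5772156649... like O(1/R)
print(euler)
</code></pre>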
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 51: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 52: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 53: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 54: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 55: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 56: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 57: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 58: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 59: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 60: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 61: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 62: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 63: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging this asymptotics, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$,</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$,</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 64: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p> Answer 65: <p>For $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\ln t\, \mathrm{d}t$$ using $$\ln t=\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s$$ we have $$\Gamma '\left ( x \right )=\int_{0}^{\infty }e^{-t}t^{x-1}\int_{0}^{\infty }\frac{e^{-s}-e^{-ts}}{s}\, \mathrm{d}s\mathrm{d}t=\Gamma \left ( x \right )\int_{0}^{\infty }\left ( e^{-s}-\frac{1}{\left ( s+1 \right )^{x}} \right )\frac{\mathrm{d}s}{s}$$ Hence, let $x=1$ we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{s+1 }-e^{-s} \right )\frac{\mathrm{d}s}{s}$$ let $s=t^k,~k&gt;0$, we get $$\gamma =\int_{0}^{\infty }\left ( \frac{1}{t^{k}+1 }-e^{-t^{k}} \right )\frac{k\, \mathrm{d}t}{t}$$ So,let $k=a,b$ $$\frac{\gamma}{a} =\int_{0}^{\infty }\left ( \frac{1}{t^{a}+1 }-e^{-t^{a}} \right )\frac{ \mathrm{d}t}{t}~~,~~\frac{\gamma}{b} =\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }-e^{-t^{b}} \right )\frac{ \mathrm{d}t}{t}$$ hence $$\frac{\gamma}{b}-\frac{\gamma}{a} =\int_{0}^{\infty }\left [\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )+\left ( e^{-x^a}-e^{-x^b} \right ) \right ]\frac{ \mathrm{d}t}{t}$$ then $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}+\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ let $t\rightarrow \dfrac{1}{t}$,we get $$\int_{0}^{1}\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=-\int_{1}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}$$ So $$\int_{0}^{\infty }\left ( \frac{1}{t^{b}+1 }- \frac{1}{t^{a}+1 } \right )\frac{\mathrm{d}t}{t}=0$$ and $$\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma =\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\, \mathrm{d}t\tag1$$ <strong>Lemma:</strong></p> <blockquote> <p>$$\int_{0}^{\infty }\frac{e^{-t^a}-\cos t^a}{t}\, \mathrm{d}t=0~,~a&gt;0$$</p> </blockquote> <p><strong>Proof:</strong> Let $$f\left ( x \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, e^{-xt}\, \mathrm{d}t$$ so $$f'\left ( x \right )=\int_{0}^{\infty }\left ( \cos t-e^{-t} \right )e^{-xt}\, \mathrm{d}t=\frac{x}{1+x^2}-\frac{1}{1+x}$$ hence $$\int_{0}^{\infty }f'\left ( x \right ) \mathrm{d}x=\ln\frac{\sqrt{1+x^2}}{1+x}\Bigg|_{0}^{\infty }=0=f\left ( \infty \right )-f\left ( 0 \right )$$ It's easy to see that $f\left ( \infty \right )=0$,so $$f\left ( 0 \right )=\int_{0}^{\infty }\frac{e^{-t}-\cos t}{t}\, \mathrm{d}t=0$$ Let $x^a\to t$, we get $$a\int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\Rightarrow \int_{0}^{\infty }\frac{e^{-t^{a}}-\cos t^{a}}{t}\, \mathrm{d}t=0\tag2$$ Now using $(1)$ and $(2)$, we get $$\Large\boxed{\color{Blue} {\int_{0}^{\infty }\frac{\cos x^{a}-\cos x^b}{x}\, \mathrm{d}x=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma }}$$</p> Answer 66: <blockquote> <p><strong>I thought it might be instructive to post a solution that leverages the Lemma posted by @Renascence_5. 
To that end, we proceed.</strong> </p> </blockquote> <p>The Lemma proved in the aforementioned post is expressed as </p> <p><span class="math-container">$$\int_0^\infty \frac{e^{-x^a}-\cos(x^a)}{x}\,dx=0 \tag 1$$</span></p> <p>for <span class="math-container">$a&gt;0$</span>.</p> <hr> <p>We now examine a generalized version of the integral of interest and write</p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{\cos(x^a)-\cos(x^b)}{x}\,dx \tag 2$$</span></p> <p>for <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b&gt;0$</span>.</p> <hr> <p>Using <span class="math-container">$(1)$</span> reveals that <span class="math-container">$(2)$</span> can be written as </p> <p><span class="math-container">$$I(a,b)=\int_0^\infty \frac{e^{-x^a}-e^{-x^b}}{x}\,dx \tag 3$$</span></p> <p>Next, we integrate by parts the integral in <span class="math-container">$(3)$</span> with <span class="math-container">$u=e^{-x^a}-e^{-x^b}$</span> and <span class="math-container">$v=\log(x)$</span> to obtain</p> <p><span class="math-container">$$\begin{align} I(a,b)&amp;=\int_0^\infty \left(ax^{a-1}e^{-x^a}-bx^{b-1}e^{-x^b}\right)\,\log(x)\,dx\\\\ &amp;=\int_0^\infty ax^{a-1}e^{-x^a}\,\log(x)\,dx-\int_0^\infty bx^{b-1}e^{-x^b}\,\log(x)\,dx\\\\ &amp;=\frac1a \int_0^\infty e^{-x}\,\log(x)\,dx-\frac1b \int_0^\infty e^{-x}\,\log(x)\,dx\\\\ &amp;=-\left(\frac1a -\frac1b\right)\,\gamma \end{align}$$</span></p> <p>where we used the integral relationship <span class="math-container">$\gamma =-\int_0^\infty e^{-x}\,\log(x)$</span>.</p> <hr> <p><strong>NOTE:</strong></p> <p>We can show that <span class="math-container">$\gamma$</span> as given by <span class="math-container">$\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$</span> is equal to <span class="math-container">$\gamma$</span> as expressed by the limit <span class="math-container">$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$</span> as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty e^{-x}\,\log(x)\,dx&amp;=\lim_{n\to \infty}\int_0^n \left(1-\frac xn\right)^n\,\log(x)\,dx\\\\ &amp;=\lim_{n\to \infty} n \int_0^1 x^n \log(n(1-x))\,dx\\\\ &amp;=\lim_{n\to \infty} n \left(\log(n) \int_0^1 x^n\,dx+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)+\int_0^1 x^n\,\log(1-x)\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n\sum_{k=1}^\infty \frac1k \int_0^1 x^{n+k}\,dx\right)\\\\ &amp;=\lim_{n\to \infty} \left(\frac{n}{n+1}\log(n)-n \sum_{k=1}^\infty \frac1{k(k+n+1)}\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^\infty \left(\frac1k-\frac1{k+n+1}\right)\right)\\\\ &amp;=\lim_{n\to \infty} \frac{n}{n+1}\left(\log(n)- \sum_{k=1}^{n+1} \frac1k\right)\\\\ &amp;=\lim_{n\to \infty} \left(\log(n)- \sum_{k=1}^{n} \frac1k\right)\\\\ \end{align}$$</span></p> <p>as was to be shown!</p> Answer 67: <p>It turns out that we have the following observation:</p> <blockquote> <p><strong>Observation.</strong> For a <em>nice</em> function $f : [0,\infty) \to \Bbb{C}$, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$</p> <p>where the constant $c(f)$ is computed by</p> <p>$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. 
\tag{2} $$</p> </blockquote> <p>The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that</p> <p>$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$</p> <p>Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon &gt; 0$ and that $f$ behaves nicely near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore</p> <p>\begin{align*} c(f) &amp;= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\ &amp;= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma. \end{align*}</p> <hr> <p>At this point this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging in $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$</p> <p>Plugging in this asymptotic expansion, we have</p> <p>$$ \int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx = \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = \frac{1}{2}\gamma + o(1) $$</p> <p>and the identity follows by letting $\epsilon \to 0^+$.</p> <hr> <p>Here, the constant $c(f)$ can be thought of as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist):</p> <ul> <li>$c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.</li> <li>$c(f(x^p)) = \frac{1}{p}c(f)$ for $p &gt; 0$.</li> <li>$c(f(px)) = c(f) - f(0)\log p$ for $p &gt; 0$.</li> </ul> <p>Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have</p> <p>\begin{align*} \int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx &amp;= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\ &amp;= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x}) = \gamma\left( \frac{1}{q} - \frac{1}{p}\right) \end{align*}</p> <p>for $p, q &gt; 0$.</p> Answer 68: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p><strong>A '$\ds{\color{#f00}{complex\ like}}$' answer can still be useful for other users. 
So, that is the reason I like to put forward the following answer:</strong></p> </blockquote> <p>\begin{align} &amp;2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x = 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - \int_{0}^{\Lambda}{1 - \cos\pars{x^{2}} \over x}\,\dd x} \\[5mm] = &amp;\ 2\lim_{\Lambda \to \infty}\bracks{% \int_{0}^{\Lambda}{1 - \cos\pars{x} \over x}\,\dd x - {1 \over 2}\int_{0}^{\Lambda^{2}}{1 - \cos\pars{x} \over x}\,\dd x} \label{1}\tag{1} \end{align}</p> <hr> <blockquote> <p>With $\ds{R &gt; 0}$:</p> </blockquote> <p>\begin{align} &amp;\int_{0}^{R}{1 - \cos\pars{x} \over x}\,\dd x = \Re\int_{0}^{R}{1 - \expo{\ic x} \over x}\,\dd x \\[5mm] = &amp; -\Re\int_{0}^{\pi/2}\bracks{1 - \exp\pars{\ic R\expo{\ic\theta}}}\ic\,\dd\theta - \Re\int_{R}^{0}{1 - \expo{-y} \over y}\,\dd y \\[5mm] = &amp;\ -\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta + \ln\pars{R}\pars{1 - \expo{-R}} - \int_{0}^{R}\ln\pars{y}\expo{-y}\,\dd y \\[5mm] \stackrel{\mrm{as}\ R\ \to\ \infty}{\sim}\,\,\, &amp;\ \ln\pars{R} - \int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \bbx{\ds{\ln\pars{R} + \gamma}} \label{2}\tag{2} \end{align}</p> <blockquote> <p>because</p> </blockquote> <p>$$ \left\{\begin{array}{l} \ds{0 &lt; \verts{\int_{0}^{\pi/2}\sin\pars{R\cos\pars{\theta}} \expo{-R\sin\pars{\theta}}\,\dd\theta} &lt; \int_{0}^{\pi/2} \exp\pars{-\,{2R \over \pi}\,\theta}\,\dd\theta = {\pi \over 2}\,{1 - \expo{-R} \over R}} \\[5mm] \mbox{and}\ \ds{\int_{0}^{\infty}\ln\pars{y}\expo{-y}\,\dd y = \left.\totald{}{\mu}\int_{0}^{\infty}y^{\mu}\expo{-y}\,\dd y\, \right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu}\right\vert_{\ \mu\ =\ 0} = \Psi\pars{1} = -\gamma} \end{array}\right. 
$$ <hr> With \eqref{1} and \eqref{2}: \begin{align} 2\int_{0}^{\infty}{\cos\pars{x^{2}} - \cos\pars{x} \over x}\,\dd x &amp; = 2\braces{\bracks{\ln\pars{\Lambda} + \gamma} - {1 \over 2}\bracks{\ln\pars{\Lambda^{2}} + \gamma}} = \bbx{\ds{\gamma}} \end{align}</p>
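<p>As a quick numerical sanity check of these answers, the exponential form $(1)$, $\int_{0}^{\infty }\frac{e^{-t^a}-e^{-t^b}}{t}\,\mathrm{d}t=\left ( \frac{1}{b}-\frac{1}{a} \right )\gamma$, is easy to verify by quadrature (by the Lemma it is equivalent to the boxed cosine identity). The following sketch assumes Python with the <code>mpmath</code> library is available:</p>

<pre><code># Sanity check of (1): int_0^oo (e^{-t^a} - e^{-t^b})/t dt = (1/b - 1/a)*gamma
from mpmath import mp, quad, exp, euler, mpf, inf

mp.dps = 25  # work to 25 significant digits

def lhs(a, b):
    # The integrand is finite at t = 0 for a, b >= 1 and decays
    # super-exponentially; splitting the range at t = 1 helps the quadrature.
    return quad(lambda t: (exp(-t**a) - exp(-t**b)) / t, [0, 1, inf])

for a, b in [(1, 2), (2, 3), (2, 5)]:
    print(lhs(a, b), (mpf(1)/b - mpf(1)/a) * euler)  # the two columns agree
</code></pre>

<p>For each pair $(a,b)$ the two printed values agree to the working precision.</p>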
Naturality of tensors in Differential Geometry
2,102,494
10
https://math.stackexchange.com/questions/2102494/naturality-of-tensors-in-differential-geometry
<p>I have always had a hard time understanding the big picture of <em>tensors</em> and <em>tensor fields</em>. I have no problem understanding why low type tensors and tensor fields such as \begin{align} \text{$scalars$ and $smooth$ $functions$ --- type $(0,0)$,}\\ \text{$vectors$ and $vector$ $fields$ --- type $(1,0)$,}\\ \text{$covectors$ and $differential$ $forms$ --- type $(0,1)$,}\\ \text{$linear$ $transformations$ and $vector$-$field$ $morphisms$ --- type $(1,1)$,}\\ \text{$inner$ $products$ and $Riemannian$ $metrics$ --- type $(0,2)$ } \end{align} are so useful and natural. As these objects all share a lot of algebraic structure I understand the reason to encapsulate them within the notion of tensor, in an algebraic context. But, from an Analysis-Geometry setting, they are way different objects, so I see no natural reason to join together all these objects and expect the resulting object (tensors) to be of so much use in Differential Geometry. Yet they are, and they are everywhere!!</p> <blockquote> <p><strong>Q:</strong> Am I missing some reason of Analytic-Geometric character which motivates this generalization? If not, how does it turn out that an object whose generalization seem to be natural only algebraically end up playing such a central role in Differential Geometry? Maybe I am just underestimating the role that the algebraic structure play in this context?</p> </blockquote>
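<p>For concreteness, every example in the list above is an instance of a single definition: a type $(r,s)$ tensor on a real vector space $V$ is a multilinear map $$T \colon \underbrace{V^{*}\times\cdots\times V^{*}}_{r\text{ copies}} \times \underbrace{V\times\cdots\times V}_{s\text{ copies}} \longrightarrow \mathbb{R}.$$ A vector $v$ is the $(1,0)$ tensor $\xi \mapsto \xi(v)$, a linear transformation $A \colon V \to V$ is the $(1,1)$ tensor $(\xi, v) \mapsto \xi(Av)$, and an inner product is a symmetric positive-definite $(0,2)$ tensor; the corresponding fields arise by letting $V$ be the tangent space at each point.</p>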
[ { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is led naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as an $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df|_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which, again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. 
</li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very thing that historically led to the identification of tensors of two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking of the covariant and contravariant nature of tensors as exclusively an algebraic property, but in fact I think it is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you apply a transformation to a space, such as a magnification, the structures you have on the geometrical object can react in two different ways: they can transform <strong>accordingly</strong>, or <strong>covariantly</strong>, meaning that in a certain sense they are deeply linked to the geometrical object you are transforming; or they can appear as <strong>independent</strong>, and are then said to transform <strong>contravariantly</strong>, because their being immune to the transformation makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebraically you choose an origin and identify this 3D space with the <em>vector space</em> $\\mathbb{R}^3$ with canonical basis $\\{e_i\\}$. Now the point $P$ corresponds to a vector $v$ and has coordinates, let's say, $(4,4,4)$. Then you decide to apply a transformation to your vector space, enlarging it by a factor of 4. All vectors of the space are enlarged and so are transformed covariantly, while $P$, which now appears to have coordinates $(1,1,1)$, has been transformed contravariantly. The deep reason for this different behaviour is that the <strong>vectors were inside the vector space</strong> that was transformed, while the <strong>point was something independent</strong> of the geometrical structure that was transformed.</p>\n" }
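<p>To make the first answer above concrete: the second derivative really does act as a symmetric bilinear map on direction vectors, i.e. a $(0,2)$-tensor, since $D^2f|_p(h,k) = \partial_s\partial_t\, f(p+sh+tk)\,\big|_{s=t=0}$. The following sketch (assuming Python with NumPy is available; the function <code>f</code> below is just a sample, not taken from the answers) probes this with central finite differences:</p>

<pre><code>import numpy as np

def f(x):
    # a sample smooth scalar function on R^3 (any smooth f would do)
    return np.sin(x[0]) * x[1] + x[2] ** 2

def d2f(p, h, k, eps=1e-5):
    # central-difference approximation of D^2 f|_p (h, k)
    return (f(p + eps*h + eps*k) - f(p + eps*h - eps*k)
            - f(p - eps*h + eps*k) + f(p - eps*h - eps*k)) / (4 * eps**2)

p = np.array([0.3, -1.0, 2.0])
h = np.array([1.0, 0.0, 0.5])
k = np.array([0.0, 2.0, 1.0])

print(d2f(p, h, 3*k), 3 * d2f(p, h, k))  # bilinearity in the second slot
print(d2f(p, h, k), d2f(p, k, h))        # symmetry of the Hessian form
</code></pre>

<p>In coordinates this bilinear map is just $h^{\top} H(p)\, k$, where $H(p)$ is the Hessian matrix of $f$ at $p$, matching item 2 of that answer.</p>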
To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. 
The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. 
From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. 
</p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. 
In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. 
To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. 
The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. 
From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. 
</p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. 
In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. 
To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. 
The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. 
From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. 
</p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p>\n" }, { "answer_id": 2106874, "score": 3, "text": "<p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p>\n\n<p>Let me demonstrate how this works:</p>\n\n<p>Let $f = (f_1, \\dots, f_m) \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p>\n\n<ol>\n<li>The first derivative (or differential) of $f$ at a point $p \\in \\mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \\colon \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which satisfies\n$$ \\lim_{h \\to 0} \\frac{f(p + h) - f(p) - (Df|_p)(h)}{\\| h \\|_{\\mathbb{R}^n}} = 0. $$\nWhen $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. 
In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\\mathbb{R}^n$ and $\\mathbb{R}^m$ as a $m \\times n$ matrix\n$$ \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_1}{\\partial x^n} \\\\\n\\vdots &amp; \\ddots &amp; \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x^1} &amp; \\dots &amp; \\frac{\\partial f_m}{\\partial x^n} \\end{pmatrix} $$\nbut when we move to the context of manifolds, we won't have a notion of \"standard bases\" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix.\nFinally, a linear map $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be identified naturally with an element of the tensor product $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$ and so the total first derivative $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$.</li>\n<li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \\mapsto Df(p)$ at $p$. The map $p \\mapsto Df(p)$ is a smooth map from $\\mathbb{R}^n$ to $\\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m)$ (which we can identify by using bases with $M_{m \\times n}(\\mathbb{R}) \\cong \\mathbb{R}^{m \\times n}$) so $(D^2f)(p)$ is a linear map with signature \n$$(D^2f)(p) \\colon \\mathbb{R}^n \\rightarrow \\operatorname{Hom}(\\mathbb{R}^n, \\mathbb{R}^m). $$\nSuch maps are naturally identified with bilinear maps $\\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ which again, can be identified as elements of $\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\left( \\mathbb{R}^n \\right)^{*} \\otimes \\mathbb{R}^m$. </li>\n</ol>\n\n<p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\\mathbb{R}^n$ to the space \n$$\\underbrace{\\left( \\mathbb{R}^n \\right)^{*} \\otimes \\dots \\otimes \\left( \\mathbb{R}^n \\right)^{*}}_{k\\text{ times}} \\otimes \\mathbb{R}^m $$\nwhich, in your notation, would be a $(0,k)$ ($\\mathbb{R}^m$-valued) tensor field on $\\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.\nIf $n = m$ (and then we can think of $f$ as a vector field on $\\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\\mathbb{R}^n$. </p>\n" }, { "answer_id": 2102935, "score": 2, "text": "<p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p>\n\n<p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p>\n\n<p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p>\n\n<p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. 
Question: Naturality of tensors in Differential Geometry

Answer 1: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is led naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up, because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as an $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases", so it is best to think of the first derivative $Df|_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$, and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify, by using bases, with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$), so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which, again, can be identified with elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field.
If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p>

Answer 2: <p>I think that what you're missing is the very thing that historically led to identifying tensors of two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking of the covariant and contravariant nature of tensors as exclusively an algebraic property, but in fact I think it is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you apply a transformation to a space, such as a magnification, the structures you have on the geometrical object can react in two different ways: they can transform <strong>accordingly</strong>, or <strong>covariantly</strong>, meaning that in a certain sense they are deeply linked to the geometrical object you are transforming; or they can appear as <strong>independent</strong>, and are then said to transform <strong>contravariantly</strong>, because their being immune to the transformation makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebraically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical basis $\{e_i\}$. Now the point $P$ corresponds to a vector $v$ and has coordinates, let's say, $(4,4,4)$. Then you decide to apply a transformation to your vector space, enlarging it by a factor of 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ - which now appears to have coordinates $(1,1,1)$ - has been transformed contravariantly. The deep reason for this different behaviour is that the <strong>vectors were inside the vector space</strong> that was transformed, while the <strong>point was something independent</strong> of the geometrical structure that was transformed.</p>
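To make Answer 1's construction concrete, here is a minimal numerical sketch. It is illustrative only and not from the answer: the helper names `jacobian` and `hessian` and the test function $f(x,y) = x^2 y$ are my own choices. It approximates $Df|_p$ by central differences as an $m \times n$ array, then differentiates the gradient map again, so the Hessian comes out exactly as the answer describes it: an $(n, n)$ array acting as a bilinear form, i.e. a $(0,2)$-tensor.

```python
import numpy as np

def jacobian(f, p, h=1e-5):
    # Central-difference approximation of Df|_p for f: R^n -> R^m,
    # returned as an (m, n) array: the matrix of the linear map Df|_p
    # with respect to the standard bases.
    p = np.asarray(p, dtype=float)
    cols = []
    for j in range(p.size):
        e = np.zeros_like(p)
        e[j] = h
        cols.append((np.atleast_1d(f(p + e)) - np.atleast_1d(f(p - e))) / (2.0 * h))
    return np.stack(cols, axis=-1)

def hessian(f, p, h=1e-4):
    # D^2 f|_p for a scalar f: R^n -> R, as an (n, n) array: the (0,2)-tensor
    # acting on a pair of directions u, v via u @ H @ v.
    grad = lambda q: jacobian(f, q, h).ravel()  # the map p -> Df(p), differentiated again
    return jacobian(grad, p, h)

f = lambda x: x[0] ** 2 * x[1]       # f(x, y) = x^2 * y
p = np.array([1.0, 2.0])

print(jacobian(f, p))                # ~[[4., 1.]] = [2xy, x^2] at (1, 2)
H = hessian(f, p)
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(u @ H @ v)                     # ~2.0 = the mixed partial d^2f/dx dy at (1, 2)
```

In the same spirit, differentiating the Hessian map once more would give (after reshaping) a $2 \times 2 \times 2$ array, matching the general statement that the $k$-th derivative is a $(0,k)$-tensor when $m = 1$.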
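Answer 2's magnification example is also easy to replay numerically. A minimal sketch, assuming a change-of-basis matrix `B` whose columns are the enlarged basis vectors: the point $P$ itself does not move, so its coordinates must be pushed through $B^{-1}$ and shrink from $(4,4,4)$ to $(1,1,1)$, moving against the basis, while anything carried along by the transformation is multiplied by $B$.

```python
import numpy as np

# Each new basis vector is the old one enlarged by a factor of 4,
# so the columns of B express the new basis in old coordinates.
B = 4.0 * np.eye(3)

v_old = np.array([4.0, 4.0, 4.0])  # coordinates of the point P in the old basis
v_new = np.linalg.solve(B, v_old)  # coordinates of the *same* point in the new basis
print(v_new)                       # [1. 1. 1.]: components shrink as the basis grows

# A vector that lives inside the space is carried along (enlarged) by the map:
w = np.array([1.0, 0.0, 0.0])
print(B @ w)                       # [4. 0. 0.]: transformed together with the space
```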
In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 4: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). 
Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 5: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. 
</li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 6: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 7: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). 
Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 8: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. 
When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 9: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. 
Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 10: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. 
The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 11: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. 
</li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 12: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 13: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). 
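<p>A second sketch, for item 2 in the scalar case $m = 1$: the Hessian at $p$ acts as a symmetric bilinear map $(u, v) \mapsto u^{\mathsf{T}} H v$, i.e. a $(0,2)$-tensor. The test function $g$ below is an arbitrary choice of mine.</p> <pre><code>import numpy as np

def g(x):
    # Illustrative scalar function g : R^3 -> R.
    return x[0] ** 2 * x[1] + np.exp(x[2])

def hessian(g, p, eps=1e-4):
    # Mixed central second differences for d^2 g / dx^i dx^j.
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n)
            ej = np.zeros(n)
            ei[i] = eps
            ej[j] = eps
            H[i, j] = (g(p + ei + ej) - g(p + ei - ej)
                       - g(p - ei + ej) + g(p - ei - ej)) / (4 * eps ** 2)
    return H

p = np.array([1.0, 2.0, 0.0])
H = hessian(g, p)

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.5, 1.0, -1.0])
print(np.allclose(H, H.T))  # True: the (0,2)-tensor is symmetric
print(u @ H @ v)            # the bilinear map evaluated on (u, v)
</code></pre>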
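<p>Finally, a sketch of the general pattern from the closing paragraph: approximating the $k$-th derivative of $f \colon \mathbb{R}^n \to \mathbb{R}^m$ by repeated differencing yields an array with one $\mathbb{R}^m$ slot and $k$ slots of size $n$, matching a $(0,k)$ $\mathbb{R}^m$-valued tensor (a $(1,k)$-tensor when $n = m$). The recursive helper is my own construction, meant only to exhibit the shapes.</p> <pre><code>import numpy as np

def deriv_tensor(f, p, k, eps=1e-3):
    # k-th derivative of f at p by repeated central differencing.
    # Each level appends one covariant slot (one trailing axis of size n).
    if k == 0:
        return f(p)
    n = p.size
    def df(x):
        cols = []
        for j in range(n):
            e = np.zeros(n)
            e[j] = eps
            cols.append((f(x + e) - f(x - e)) / (2 * eps))
        return np.stack(cols, axis=-1)
    return deriv_tensor(df, p, k - 1, eps)

# Illustrative vector field F : R^3 -> R^3 (so n = m).
F = lambda x: np.array([x[0] * x[1], x[1] * x[2], x[2] * x[0]])
p = np.ones(3)
for k in range(4):
    print(k, deriv_tensor(F, p, k).shape)
# prints: 0 (3,)   1 (3, 3)   2 (3, 3, 3)   3 (3, 3, 3, 3)
</code></pre>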
Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 16: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. 
The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 17: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. 
</li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 18: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 19: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). 
Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 20: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. 
When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 21: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. 
Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 22: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. 
The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 23: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. 
</li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 24: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 25: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). 
Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 26: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. 
When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 27: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. 
Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 28: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. 
The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 29: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. 
</li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 30: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 31: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). 
Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 32: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. 
When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. The deep reason for this different behaviour is that <strong>vectors were inside the vector space</strong> that transformed, while the <strong>point was something indipendent</strong> from the geometrical structure that was transformed.</p> Answer 33: <p>While this is seldom emphasized, certain tensors already appear quite naturally in the context of multi-variable calculus on $\mathbb{R}^n$. If one wants to treat all the higher-order derivatives of a function $f$ in a unified, basis-independent and consistent way, one is lead naturally to the notion of (symmetric) tensors of various orders. From this perspective, it is natural to discuss tensors of arbitrary order together and bundle them up because higher order tensors appear naturally as the derivatives of lower order tensors. For example, if you care about the second derivative (also known as the Hessian) of a scalar function (a $(0,0)$-tensor), you should care about $(0,2)$-tensors. </p> <p>Let me demonstrate how this works:</p> <p>Let $f = (f_1, \dots, f_m) \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map (in the sense that all possible partial derivatives of the $f_i$ of all orders exist). Then:</p> <ol> <li>The first derivative (or differential) of $f$ at a point $p \in \mathbb{R}^n$ is defined as the unique linear map $(Df)(p) = Df|_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ which satisfies $$ \lim_{h \to 0} \frac{f(p + h) - f(p) - (Df|_p)(h)}{\| h \|_{\mathbb{R}^n}} = 0. $$ When $m = 1$, the scalar $Df|_p(h)$ gives us the directional derivative $\frac{d}{dt} f(p + th)|_{t = 0}$ of $f$ at a point $p$ in the direction $h$. In general, the linear map $Df|_p$ can be represented with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ as a $m \times n$ matrix $$ \begin{pmatrix} \frac{\partial f_1}{\partial x^1} &amp; \dots &amp; \frac{\partial f_1}{\partial x^n} \\ \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial f_m}{\partial x^1} &amp; \dots &amp; \frac{\partial f_m}{\partial x^n} \end{pmatrix} $$ but when we move to the context of manifolds, we won't have a notion of "standard bases" so it is best to think of the first derivative $Df_p$ as a linear map and not as a matrix. 
Finally, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}^m$ can be identified naturally with an element of the tensor product $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$ and so the total first derivative $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$.</li> <li>The second derivative $(D^2f)(p) = D^2f|_p$ of $f$ at a point $p$ is the first derivative of the map $p \mapsto Df(p)$ at $p$. The map $p \mapsto Df(p)$ is a smooth map from $\mathbb{R}^n$ to $\operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ (which we can identify by using bases with $M_{m \times n}(\mathbb{R}) \cong \mathbb{R}^{m \times n}$) so $(D^2f)(p)$ is a linear map with signature $$(D^2f)(p) \colon \mathbb{R}^n \rightarrow \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m). $$ Such maps are naturally identified with bilinear maps $\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^m$ which again, can be identified as elements of $\left( \mathbb{R}^n \right)^{*} \otimes \left( \mathbb{R}^n \right)^{*} \otimes \mathbb{R}^m$. </li> </ol> <p>More generally, the $k$-th derivative of $f$ turns out to be a smooth map from $\mathbb{R}^n$ to the space $$\underbrace{\left( \mathbb{R}^n \right)^{*} \otimes \dots \otimes \left( \mathbb{R}^n \right)^{*}}_{k\text{ times}} \otimes \mathbb{R}^m $$ which, in your notation, would be a $(0,k)$ ($\mathbb{R}^m$-valued) tensor field on $\mathbb{R}^n$. If $m = 1$, this is just a $(0,k)$ tensor field. If $n = m$ (and then we can think of $f$ as a vector field on $\mathbb{R}^n$), the $k$-th derivative of $f$ is a $(1,k)$-tensor on $\mathbb{R}^n$. </p> Answer 34: <p>I think that what you're missing is the very one thing that historically brought to identify tensors in two different types: their <strong>covariant</strong> and <strong>contravariant</strong> nature.</p> <p>It might be that you're thinking at covariant and contravariant nature of tensors as exclusively an algebric propriety, but in fact I think is deeply geometric. </p> <p>Let's see if I can explain what I mean. When you do a transformation to a space such as a magnification the structures you have on the geometrical object can react in 2 different ways: they can transform <strong>accordingly</strong> or <strong>covariantly</strong>, saying that in a certain sense are deeply linked to the geometrical object you are transforming; or otherwise they can be appear as <strong>indipendent</strong> and then said to transform <strong>contravariantly</strong> because when you apply a transformation their being immune to it makes them appear to transform the opposite way. </p> <p>So let's do an example to illustrate what I mean. Let's say you have a point $P$ in 3D space. To treat this space algebrically you choose an origin and identify this 3D space with the <em>vector space</em> $R^3$ with canonical base ${e_i}$. Now the point $P$ corresponds to vector $v$ and has coordinates let's say (4,4,4). Then you decide to operate a transformation on your vectorial space enlarging by a factor 4. All vectors of the space are enlarged and so they are transformed covariantly, while $P$ -which now appears to have coordinates (1,1,1) - it has been transformed contravariantly. 
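The magnification example in the covariant/contravariant answer above is also easy to check in code; a minimal sketch (variable names are mine), assuming the transformation acts on the basis vectors:

```python
# Minimal sketch: enlarging the basis by a factor of 4 multiplies the
# basis vectors by A (covariant behaviour) and the coordinates of the
# fixed point P by A^{-1} (contravariant behaviour).
import numpy as np

A = 4 * np.eye(3)                    # the magnification
old_basis = np.eye(3)                # columns are the canonical e_i
new_basis = old_basis @ A            # basis vectors transform covariantly

old_coords = np.array([4.0, 4.0, 4.0])       # P in the old basis
new_coords = np.linalg.inv(A) @ old_coords   # coordinates transform contravariantly

print(new_coords)                                       # [1. 1. 1.]
# P itself is unchanged: basis @ coordinates gives the same point
print(old_basis @ old_coords, new_basis @ new_coords)   # both [4. 4. 4.]
```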
Show that $x^2+y^2+z^2=999$ has no integer solutions
2,114,429
10
https://math.stackexchange.com/questions/2114429/show-that-x2y2z2-999-has-no-integer-solutions
<blockquote> <p>The question is asking us to prove that $x^2+y^2+z^2=999$ has no integer solutions. </p> </blockquote> <p><strong>Attempt at a solution:</strong> </p> <p>So I've noticed that since 999 is odd, either one of the variables or all three of the variables must be odd. </p> <p>If I assume that only one variable is odd, I can label the variables like this: $$x=2k_1+1$$ $$y=2k_2$$ $$z=2k_3$$</p> <p>By substituting, and doing some algebra, I can conclude that $k_1^2+k_2^2+k_3^2+k_1=249.5$, which is not possible since all $k_i\in\Bbb Z$.</p> <p>If all three are odd, I can rename the variables like this: $$x=2k_1+1$$ $$y=2k_2+1$$ $$z=2k_3+1$$ Eventually I conclude that $k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249$, but I don't know where to go from there. </p> <p>An alternative I've considered is brute-forcing it, but I'd rather avoid that if I can. Any assistance here would be greatly appreciated. </p>
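Before the answers, a note on the brute-force route the asker mentions: it is actually tiny, since without loss of generality $0 \le x \le y \le z \le \lfloor\sqrt{999}\rfloor = 31$ (signs and order don't matter). A minimal sketch confirming there are no solutions:

```python
# Minimal brute-force check over 0 <= x <= y <= z: confirms that
# x^2 + y^2 + z^2 = 999 has no solutions at all.
from math import isqrt

n = 999
hits = [(x, y, z)
        for x in range(isqrt(n) + 1)
        for y in range(x, isqrt(n - x * x) + 1)
        for z in (isqrt(n - x * x - y * y),)
        if z >= y and x * x + y * y + z * z == n]
print(hits)   # [] -- 999 is not a sum of three squares
```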
[ { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . .</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" }, { "answer_id": 2114448, "score": 10, "text": "<p>Using congruences . . 
.</p>\n\n<p>Odd squares are always $1 \\pmod 8$, hence also $1 \\pmod 4$.\n<p>\nEven squares are always $0 \\pmod 4$, hence either $0 \\text{ or } 4 \\pmod 8$.\n<p>\nSince $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd.\n<p>\nIf $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \\equiv 3 \\pmod 8$, contradiction, since $999 \\equiv 7 \\pmod 8$.\n<p>\nIf exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \\equiv 1 \\pmod 4$,\ncontradiction, since $999 \\equiv 3 \\pmod 4$.</p>\n" }, { "answer_id": 2114460, "score": 5, "text": "<p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p>\n\n<p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\\\\n(k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\\\\nk_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\\\\n$$\nand we have three even terms summing to an odd number, which cannot therefore exist.</p>\n" }, { "answer_id": 2114443, "score": 4, "text": "<p>This is impossible because the number is congruent to $-1\\bmod 8$.</p>\n\n<p>Notice that squares are only $1,4$ and $0\\bmod 8$.</p>\n\n<hr>\n\n<p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p>\n" } ]
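The residue fact all three answers lean on can be verified mechanically; a minimal sketch (one full period of residues mod 8 suffices, since $(r+8)^2 \equiv r^2 \pmod 8$):

```python
# Squares mod 8 only hit {0, 1, 4}, and no three of those residues can
# sum to 7 mod 8 -- but 999 = 124*8 + 7.
from itertools import product

squares_mod8 = {(r * r) % 8 for r in range(8)}
sums_mod8 = {(a + b + c) % 8 for a, b, c in product(squares_mod8, repeat=3)}
print(sorted(squares_mod8))        # [0, 1, 4]
print(sorted(sums_mod8), 999 % 8)  # [0, 1, 2, 3, 4, 5, 6] and 7
```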
Question: Show that $x^2+y^2+z^2=999$ has no integer solutions Answer 1: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 2: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 3: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 4: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 5: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 6: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 7: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. 
<p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 8: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 9: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 10: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 11: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 12: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 13: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 14: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 15: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 16: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. 
<p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 17: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 18: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 19: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 20: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 21: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 22: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. 
<p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 23: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 24: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 25: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 26: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 27: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 28: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. <p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p> Answer 29: <p>My immediate solution was the same as Jorge Fernández Hidalgo, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point):</p> <p>$$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.</p> Answer 30: <p>This is impossible because the number is congruent to $-1\bmod 8$.</p> <p>Notice that squares are only $1,4$ and $0\bmod 8$.</p> <hr> <p>In fact there is a theorem by Legendre that say that a number is not the sum of three squares if and only if it is of the form $4^a(8b-1)$. (the other direction is the tough one).</p> Answer 31: <p>Using congruences . . .</p> <p>Odd squares are always $1 \pmod 8$, hence also $1 \pmod 4$. 
<p> Even squares are always $0 \pmod 4$, hence either $0 \text{ or } 4 \pmod 8$. <p> Since $x^2 + y^2 + z^2$ is odd, either $x,y,z$ are all odd, or exactly one of $x,y,z$ is odd. <p> If $x,y,z$ are all odd, then $x^2 + y^2 + z^2 \equiv 3 \pmod 8$, contradiction, since $999 \equiv 7 \pmod 8$. <p> If exactly one of $x,y,z$ is odd, then $x^2 + y^2 + z^2 \equiv 1 \pmod 4$, contradiction, since $999 \equiv 3 \pmod 4$.</p>
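As a quick sanity check of the congruence argument above (a sketch added for illustration, not code from any of the answers): the only squares mod 8 are $0$, $1$ and $4$, so no three of them can sum to $7 \pmod 8$, and a brute-force search confirms that 999 has no representation as a sum of three squares.

```python
# Verify the mod-8 argument numerically: squares mod 8 lie in {0, 1, 4},
# and no a^2 + b^2 + c^2 equals 999 (which is 7 mod 8).
squares = [n * n for n in range(32)]  # 31^2 = 961 is the largest square below 999

print(sorted({s % 8 for s in squares}))  # -> [0, 1, 4]

hits = [(a, b, c)
        for a in range(32)
        for b in range(a, 32)
        for c in range(b, 32)
        if a * a + b * b + c * c == 999]
print(hits)  # -> [], exactly as the congruence argument predicts
```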
What counts as a "known" function
2,116,934
10
https://math.stackexchange.com/questions/2116934/what-counts-as-a-known-function
<p>This question arises from a discussion on <a href="https://math.stackexchange.com/questions/2116861/how-to-deal-with-the-integral-of-ex-ex2">How to deal with the integral of $e^x e^{x^2}$?</a> where the error function is invoked and there is a comment that this might not be a satisfactory solution (at least for some) because it is more a renaming exercise than a solution. Similarly for the Lambert W, which appears regularly.</p> <p>Now there are functions we all know and learn, like the trigonometric functions, exponential function and logarithm, which emerge reasonably naturally in various contexts early in our mathematical experience. Hyperbolic functions (sinh/cosh etc) are not so immediately used, but emerge through being the odd and even parts of the exponential function as well as in other ways.</p> <p>Then there are special functions of various kinds like Bessel Functions, Legendre Polynomials, Hypergeometric functions as well as those in the presenting discussion.</p> <p>It seems to me that whether we actually like particular functions or not - or consider them as solutions rather than renamings - might depend on the fluency and intuition we have developed in understanding them. It seems to me to be a bit like solving equations by radicals, which is a nice thing to do, because we know what radicals are - but practically we return to numerical estimates very often, so the radical gives an illusion of precision rather than a practical solution.</p> <p>So my question is approximately "what makes a function familiar enough to be a special function?"</p> <p>Now this could be too vague, or opinion-based, or more suitable for meta - but on the other hand I thought the discussion was interesting enough to highlight, and maybe others will have some useful insight.</p>
[ { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. 
</p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei}(x)$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly weak analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" } ]
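To make the contrast between catalogued special functions and ad-hoc ones concrete, here is a small sketch (an illustration assuming SciPy is available, not code from the answers): the special function $\operatorname{Ei}(x)$ is a single library call, whereas the hypothetical $\operatorname{B}(x)$ above has to be approximated with general-purpose quadrature, here with the arbitrary normalization $\operatorname{B}(0)=0$.

```python
# A special function like Ei(x) ships with SciPy; the answer's ad-hoc
# B(x) = ∫ x^x dx does not, so we approximate it numerically instead.
from scipy.integrate import quad   # general-purpose numerical quadrature
from scipy.special import expi     # the exponential integral Ei(x)

print(expi(1.0))  # Ei(1) ≈ 1.8951178...

def B(x):
    """Numerical stand-in for B(x), normalized (arbitrarily) so that B(0) = 0."""
    value, _abs_error = quad(lambda t: t ** t, 0.0, x)
    return value

print(B(1.0))  # ≈ 0.7834305..., one of the two Sophomore's Dream integrals
```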
I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? 
We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. 
A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. 
This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. 
See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). 
So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. 
</li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. 
Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. 
Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. 
Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. 
</p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. 
See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). 
So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. 
\n\n<p>Now comes the important distinction here: <strong>if I study $\operatorname{B}(x)$ enough, it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details about the function, <strong>as well as having other people use it.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly weak analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed as a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs and roots, trig and hyperbolic functions (by combining exponentials and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>
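\n\n<p><em>(A standard pair of identities, added only to make the \"combining exponentials\" step explicit:)</em>\n$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \arcsin x = -i \ln\!\left(i x + \sqrt{1 - x^2}\right),$$\nso the trigonometric functions and their inverses really are finite compositions of exponentials, logarithms, and roots once complex numbers are allowed.</p>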
Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. 
Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. 
Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. 
</p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. 
See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). 
So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. 
</li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. 
Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. 
Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. 
Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. 
</p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. 
See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). 
So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. 
</li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" }, { "answer_id": 2116984, "score": 8, "text": "<p>Here is my two cents on the topic:</p>\n\n<p>There are three distinct types of <em>\"known\"</em> functions. I will evaluate each case separately, and show how we build up our <em>\"set of functions\"</em></p>\n\n<hr>\n\n<p><strong>1. 
Elementary Functions</strong> </p>\n\n<p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\\sin(2x) - 5$ can be broken up into </p>\n\n<ul>\n<li>$\\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li>\n<li>$2x$, which is a mulitiplication composed into $\\sin(x)$</li>\n<li>$-5$, the addition of the additive inverse of $5$ the expression above. </li>\n</ul>\n\n<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\\int \\frac{e^x}{x}\\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p>\n\n<hr>\n\n<p><strong>2. Special Functions</strong> </p>\n\n<p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p>\n\n<ul>\n<li><em>Addendum: We allow the functions created by the integration to be \"Special Functions\", but not the integration or differentiation operators themselves. See <a href=\"https://en.wikipedia.org/wiki/Closed-form_expression\">this page</a> for more details</em></li>\n</ul>\n\n<p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for \"Analytic Continuation\", i.e. extending the domains of the functions inside of our set. </p>\n\n<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p>\n\n<ul>\n<li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li>\n<li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols \"$\\int \\frac{e^x}{x} \\mathrm{d}x$\" than to share information on \"$\\operatorname{Ei(x)}$\" or \"The Exponential Integral\". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li>\n</ul>\n\n<p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p>\n\n<hr>\n\n<p><strong>3. 
Other Functions</strong> </p>\n\n<p>What falls into this list is really subjective. In my book though, I generally include <a href=\"https://en.wikipedia.org/wiki/Power_series\">(generalized) power series</a>, <a href=\"https://en.wikipedia.org/wiki/Differintegral\">differintegrals</a>, and so forth. </p>\n\n<p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href=\"http://math.eretrandre.org/tetrationforum/attachment.php?aid=788\">this paper</a> that had a bit on the subject, creating a function $\\operatorname{Sphd}(x)$ to generalize the notion of the <a href=\"https://en.wikipedia.org/wiki/Sophomore&#39;s_dream\">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent \"Brevan's Function\", which I <em>defined</em> to be $\\operatorname{B}(x)=\\int x^x \\mathrm{d}x$. </p>\n\n<p>Now comes the important distinction here: <strong>If I study $\\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p>\n\n<hr>\n\n<p><em>Note: I feel that this is currently a fairly weak analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>\n" }, { "answer_id": 2116945, "score": 8, "text": "<p>We use the term <a href=\"https://en.wikipedia.org/wiki/Elementary_function\"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs and roots, trig and hyperbolic functions (by combining exponentials and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p>\n\n<p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>\n" } ]
Question: What counts as a &quot;known&quot; function Answer 1: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs and roots, trig and hyperbolic functions (by combining exponentials and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p>
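<p>To make that boundary concrete, a computer algebra system draws the same line (a minimal sketch, assuming SymPy is available; it is an illustration, not part of any formal definition): differentiating an elementary expression stays elementary, while integrating one can force you out of the class.</p> <pre><code>import sympy as sp

x = sp.symbols('x')

# Differentiating an elementary expression yields another elementary one.
print(sp.diff(sp.sin(2 * x) - 5, x))   # 2*cos(2*x)

# Integrating may not: SymPy has to answer with the special function Ei(x),
# because exp(x)/x has no elementary antiderivative.
print(sp.integrate(sp.exp(x) / x, x))  # Ei(x)
</code></pre>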
Answer 2: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em>.</p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em>. Fundamentally, an elementary function is anything involving addition, multiplication, and exponentiation, along with their inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into the following pieces (a short code sketch of the same decomposition follows the list): </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a multiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ to the expression above. </li> </ul>
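<p>For instance (a minimal sketch, again assuming SymPy; the decomposition itself is the point, the library is just a convenience), the pieces above can be seen mechanically:</p> <pre><code>import sympy as sp

x = sp.symbols('x')
expr = sp.sin(2 * x) - 5

# sin really is built out of complex exponentials, as described above:
print(sp.sin(x).rewrite(sp.exp))  # -I*(exp(I*x) - exp(-I*x))/2

# and the whole expression is one finite tree of sums, products and
# function applications over x and integer constants:
print(sp.srepr(expr))  # e.g. Add(Integer(-5), sin(Mul(Integer(2), Symbol('x'))))
</code></pre>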
<p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this in our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of functions seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p>
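<p>Named special functions also come with well-studied implementations, which is part of their value. A small numerical check (a sketch, assuming SciPy is installed; <code>scipy.special.expi</code> computes $\operatorname{Ei}(x)$ for real arguments):</p> <pre><code>import numpy as np
from scipy.special import expi, gamma

# Ei is an antiderivative of exp(x)/x: compare a central difference
# of Ei against the integrand itself.
x, h = 2.0, 1e-6
print((expi(x + h) - expi(x - h)) / (2 * h))  # ~3.69452...
print(np.exp(x) / x)                          # e^2 / 2 = 3.69452...

# The Gamma function extends the factorial beyond the integers:
print(gamma(5))    # 24.0 (= 4!)
print(gamma(0.5))  # 1.77245... (= sqrt(pi))
</code></pre>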
<p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, there are a number of reasons for having only finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei}(x)$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - what do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly weak analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p>
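<p>Even without official status, a freshly named function like $\operatorname{B}(x)$ is perfectly usable numerically. A quick sketch (assuming SciPy; the series is the classical Sophomore's Dream identity $\int_0^1 x^x\,\mathrm{d}x = \sum_{n=1}^\infty (-1)^{n+1} n^{-n}$):</p> <pre><code>from scipy.integrate import quad

# "B(1) - B(0)", i.e. the integral of x^x over [0, 1], evaluated numerically:
val, err = quad(lambda t: t ** t, 0, 1)
print(val)  # 0.78343051...

# Cross-check against the Sophomore's Dream series sum_{n>=1} (-1)^(n+1) / n^n:
print(sum((-1) ** (n + 1) / n ** n for n in range(1, 20)))
</code></pre>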
Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. 
Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 7: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 8: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. 
extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 9: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. 
Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 10: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. 
at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 11: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 12: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. 
</li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. 
This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 13: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 14: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. 
See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 15: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. 
The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 16: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. 
This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 17: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 18: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). 
We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. </p> <ul> <li><em>Addendum: We allow the functions created by the integration to be "Special Functions", but not the integration or differentiation operators themselves. See <a href="https://en.wikipedia.org/wiki/Closed-form_expression">this page</a> for more details</em></li> </ul> <p>Another common example is the Gamma Function. We all know that the factorial function $n!$ is elementary; it is defined purely by multiplication (or division if you want to easily justify $0!$). Special Functions are thus inherently tied to the desire for "Analytic Continuation", i.e. extending the domains of the functions inside of our set. </p> <p>It now seems like we can just keep expanding our set of functions to be larger and larger, and this is true; however, for there are a number of reasons for having finitely many special functions. </p> <ul> <li>One largely philosophical reason is simply that humans live for finite time and can accomplish only finitely many tasks, while there are infinitely many functions. </li> <li>A more grounded reason is that many common tasks involve the same functions, and so we study these functions more in depth and know a lot of properties about them. This allows researchers to share knowledge of the functions more easily; for example, it is a lot harder to share information on the set of symbols "$\int \frac{e^x}{x} \mathrm{d}x$" than to share information on "$\operatorname{Ei(x)}$" or "The Exponential Integral". <em>If you don't believe me, you haven't ever tried to find information online, e.g. at Wolfram MathWorld</em></li> </ul> <p>Nevertheless, there are simply going to be problems we desire to solve that involve functions we haven't included yet in our set. One concrete example is the integral of $x^x$ - What do we do with this? We can expand our list of functions to include (some):</p> <hr> <p><strong>3. Other Functions</strong> </p> <p>What falls into this list is really subjective. In my book though, I generally include <a href="https://en.wikipedia.org/wiki/Power_series">(generalized) power series</a>, <a href="https://en.wikipedia.org/wiki/Differintegral">differintegrals</a>, and so forth. </p> <p>Also included in this category is what you need in the moment. For example, at one point in my research I <em>needed</em> a way to compactly express the integral of $x^x$. 
After a little research I found <a href="http://math.eretrandre.org/tetrationforum/attachment.php?aid=788">this paper</a> that had a bit on the subject, creating a function $\operatorname{Sphd}(x)$ to generalize the notion of the <a href="https://en.wikipedia.org/wiki/Sophomore&#39;s_dream">Sophomore's Dream</a>. This helped me a bit, but what I actually did was just invent "Brevan's Function", which I <em>defined</em> to be $\operatorname{B}(x)=\int x^x \mathrm{d}x$. </p> <p>Now comes the important distinction here: <strong>If I study $\operatorname{B}(x)$ enough it could become a special function</strong>. Creating a new special function generally entails having discovered and published a lot of details on the function, <strong>as well as having other people use the function.</strong></p> <hr> <p><em>Note: I feel that this is currently a fairly week analysis, especially the last section. I will update, expand, and correct this post when I have time.</em></p> Answer 19: <p>We use the term <a href="https://en.wikipedia.org/wiki/Elementary_function"><em>elementary function</em></a> for any function that can be expressed in terms of a finite composition of polynomials, exponentials, and inverses of the same (roughly speaking). So it includes logs, and roots, trig and hyperbolic by combining exponentials (and passing to complex numbers), and inverses of those, etc. The way I explain it to students is, “If you can evaluate it on a standard scientific calculator, it's an elementary function.” I second the recommendation of Brian's article.</p> <p>This is a pretty good place to draw the line if you're teaching an undergrad calculus course. Other functions can be “admitted” when their need comes up in more advanced courses.</p> Answer 20: <p>Here is my two cents on the topic:</p> <p>There are three distinct types of <em>"known"</em> functions. I will evaluate each case separately, and show how we build up our <em>"set of functions"</em></p> <hr> <p><strong>1. Elementary Functions</strong> </p> <p>The elementary functions are given a special category because they share special properties <em>(*cough* mumbles something about differential field *cough*)</em> Fundamentally, an elementary functions is anything involving addition, multiplication, and exponentiation, along with the inverses (when working in $\mathbb{C}$). We can (finitely) compose these operations to get new elementary functions - for example, $\sin(2x) - 5$ can be broken up into </p> <ul> <li>$\sin(x)$, which is defined as the imaginary part of a complex exponential involving multiplication</li> <li>$2x$, which is a mulitiplication composed into $\sin(x)$</li> <li>$-5$, the addition of the additive inverse of $5$ the expression above. </li> </ul> <p>Once we start messing with compositions or operators outside of this list we encounter issues. A simple is example is $\int \frac{e^x}{x}\mathrm{d}x$, which is not in our list. Finding a way to include this into our list of functions, we get:</p> <hr> <p><strong>2. Special Functions</strong> </p> <p>The <em>special functions</em> are a way to expand our set of elementary functions to include other commonly needed functions. One simple example is the integration above; we often term this function $\operatorname{Ei}(x)$, known as the <em>Exponential Integral</em>. We thus clearly see that integration and differentiation operators are not in our set of elementary operations; for most intents and purposes we really want these operations, and so extending our set of function seems natural. 
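<p><em>Since $\operatorname{B}(x)=\int x^x \mathrm{d}x$ above has no known elementary or standard special-function closed form, the practical handle on it is numerical. Here is a minimal sketch (Python with SciPy, an assumption of mine rather than the answerer's setup) evaluating $\int_0^1 x^x\,\mathrm{d}x$ and checking it against the series companion of the Sophomore's Dream, $\sum_{n\ge1}(-1)^{n+1}n^{-n}$:</em></p>
<pre><code># Sketch: integral of x^x over [0, 1] equals the alternating series
# sum over n of (-1)^(n+1) / n^n (the companion to the Sophomore's dream).
from scipy.integrate import quad

integral, _ = quad(lambda x: x**x, 0, 1)  # x^x tends to 1 as x tends to 0
series = sum((-1) ** (n + 1) / n**n for n in range(1, 20))
print(integral, series)  # both are approximately 0.7834305107
</code></pre>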
Proving $\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x = -\frac{31\pi^6}{252}$
2,116,974
10
https://math.stackexchange.com/questions/2116974/proving-int-01-frac-lnx51x-mathrmdx-frac31-pi6252
<p>I would like to show the following identity:</p> <blockquote> <p>$$\boxed{ I := \int_0^1 \dfrac{(\ln(x))^5}{1+x} \mathrm{d}x = -\dfrac{31\pi^6}{252} }$$</p> </blockquote> <hr> <p>Here is what I tried. The change of variables $u=1/x$ (so that $\ln(x)=-\ln(u)$) yields $$I= \int_1^{\infty} \dfrac{(-\ln(u))^5}{1+1/u} \dfrac{1}{u^2} \mathrm{d}u = -\int_1^{\infty} \dfrac{(\ln(u))^5}{u^2+u} \mathrm{d}u$$ Then $z=u-1$ gives $$I = -\int_{0}^{\infty} \dfrac{(\ln(z+1))^5}{z^2+3z+2} \mathrm{d}z $$ with $z^2+3z+2=(z+1)(z+2)$. I wanted to use contour integration like <a href="https://math.stackexchange.com/questions/2027339">here</a>, but I was not sure how to proceed in this case. Anyway, the computations of the residues (of which "well-chosen" function? Maybe something like <a href="http://www.wolframalpha.com/input/?i=residues%20ln(z%2B1)%5E6%20%2F%20(z%5E2%2B3z%2B2)" rel="nofollow noreferrer">this</a>?) seem to be difficult.</p> <p>I believe that we can generalize to $\frac{(\ln(x))^n}{1+x}$, or maybe even more (e.g. $\frac{(\ln(x))^n}{ax^2+bx+c}$). Related computations are: <a href="https://math.stackexchange.com/questions/1976510">(1)</a>, <a href="https://math.stackexchange.com/questions/316745">(2)</a>, <a href="https://math.stackexchange.com/questions/1854854">(3)</a>, <a href="https://math.stackexchange.com/questions/1186002">(4)</a>.</p> <p>Thank you for your detailed help.</p>
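<p><em>Before proving the identity, a quick numerical sanity check is reassuring. A minimal sketch in Python with SciPy (my choice of tooling; <code>quad</code> copes with the integrable $\ln^5 x$ singularity at $0$, though a raised subdivision limit helps):</em></p>
<pre><code># Sketch: compare the integral of ln(x)^5 / (1 + x) over [0, 1]
# with the claimed closed form -31 * pi^6 / 252.
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: np.log(x) ** 5 / (1 + x), 0, 1, limit=200)
print(value, -31 * np.pi**6 / 252)  # both are approximately -118.2661
</code></pre>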
[ { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow 
noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span 
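Dirichlet eta">
The closed form derived in the answers can be checked numerically for several orders at once, which also guards against the easy-to-drop sign of the alternating series. A sketch with mpmath (an addition for verification, not from any of the answers):

```python
from mpmath import mp, quad, log, zeta, factorial, mpf

mp.dps = 30

# Claim from the answers: for integer n >= 1,
#   int_0^1 ln(x)^n / (1 + x) dx = (-1)^n (1 - 2^(-n)) n! zeta(n + 1)
for n in range(1, 7):
    numeric = quad(lambda x, n=n: log(x)**n / (1 + x), [0, 1])
    closed = (-1)**n * (1 - mpf(2)**(-n)) * factorial(n) * zeta(n + 1)
    print(n, numeric, closed)  # the two columns agree; n = 5 gives -31*pi^6/252
```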
class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span 
class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. 
\\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. 
In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n 
~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span 
class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span 
class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. 
\\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. 
In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n 
~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span 
class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span 
class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. 
\\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\\mathcal{I}_n=(-1)^nn!\\sum_{m=0}^\\infty\\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\\sum_{m=1}^\\infty\\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. 
In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\\mathcal{I}_5=-5!\\cdot\\eta(6)=\\frac{-31\\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" }, { "answer_id": 2116986, "score": 4, "text": "<p>HINT:</p>\n\n<p>Enforce the substitution $x\\to e^{-x}$, expand the resulting denominator in a geometric series of $\\sum_{n=0}^{\\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\\zeta(6)$.</p>\n\n<p>Alternatively, note that</p>\n\n<p>$$\\int_0^1 \\frac{\\log^5(x)}{1+x}\\,dx=\\left. \\left(\\frac{d^5}{da^5}\\int_0^1\\frac{x^a}{1+x}\\,dx\\right)\\right|_{a=0}$$</p>\n" }, { "answer_id": 3890867, "score": 2, "text": "<p>Let <span class=\"math-container\">$\\Re(s)&gt;0$</span> and\n<span class=\"math-container\">$$J(s)=\\int_0^1\\frac{\\ln^sx}{1+x}dx.$$</span>\nLetting <span class=\"math-container\">$x=e^{-u}$</span>,\n<span class=\"math-container\">$$\\begin{align}\nJ(s)&amp;=(-1)^s\\int_0^\\infty\\frac{u^se^{-u}}{1+e^{-u}}du\\\\\n&amp;=(-1)^s\\int_0^\\infty u^s\\sum_{n\\ge1}(-1)^ne^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}(-1)^n\\int_0^\\infty u^se^{-nu}du\\\\\n&amp;=(-1)^s\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\int_0^\\infty u^se^{-u}du\\\\\n&amp;=(-1)^s\\Gamma(s+1)\\sum_{n\\ge1}\\frac{(-1)^n}{n^{s+1}}\\\\\n&amp;=(-1)^s(2^{-s}-1)\\Gamma(s+1)\\zeta(s+1).\n\\end{align}$$</span></p>\n" }, { "answer_id": 3891148, "score": 1, "text": "<p><span class=\"math-container\">\\begin{align}J&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1+x} \\mathrm{d}x \\\\\n&amp;=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\int_0^1 \\frac{2u(\\ln(u))^5}{1-u^2} \\mathrm{d}u\\\\\n&amp;\\overset{x=u^2}=\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x-\\frac{1}{2^5}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=\\frac{31}{32}\\int_0^1 \\frac{(\\ln(x))^5}{1-x} \\mathrm{d}x\\\\\n&amp;=-\\frac{31}{32}\\times \\frac{5!}{945}\\times \\zeta(6)\\\\\n&amp;=\\boxed{-\\frac{31\\pi^6}{252}}\n\\end{align}</span>\nI assume: <span class=\"math-container\">$\\displaystyle \\int_0^1 \\frac{\\ln^5 x}{1-x}dx=-5!\\zeta(6)$</span> and <span class=\"math-container\">$\\displaystyle\\zeta(6)=\\frac{\\pi^6}{945}$</span></p>\n" }, { "answer_id": 3891182, "score": 0, "text": "<p>Use a geometric series.\n<span class=\"math-container\">$$\\frac{1}{1+x}=\\sum_{k=0}^\\infty(-1)^kx^k$$</span>\nSo\n<span class=\"math-container\">$$\\mathcal{I}_n=\\int_0^1\\frac{\\ln(x)^n}{1+x}\\mathrm{d}x=\\sum_{m=0}^\\infty\\left[(-1)^mI_{m,n}\\right]$$</span>\nWhere\n<span class=\"math-container\">$$I_{m,n}=\\int_0^1 x^m\\ln(x)^n\\mathrm{d}x$$</span>\nNow use a substitution <span class=\"math-container\">$x=e^{-t}$</span> which changes our region of integration to <span class=\"math-container\">$(\\infty,0]$</span>.\n<span class=\"math-container\">$$I_{m,n}=\\int_\\infty^0(e^{-t})^m(-t)^n(-e^{-t})\\mathrm{d}t$$</span>\n<span class=\"math-container\">$$=(-1)^n\\int_0^\\infty t^ne^{-t(m+1)}\\mathrm{d}t$$</span>\nNow use another substitution <span class=\"math-container\">$z=t(m+1)$</span>:\n<span class=\"math-container\">$$I_{m,n}=(-1)^n\\int_0^\\infty\\left(\\frac{z}{m+1}\\right)^ne^{-z}\\frac{\\mathrm{d}z}{m+1}$$</span>\n<span class=\"math-container\">$$=\\frac{(-1)^n 
~n!}{(m+1)^{n+1}}$$</span>\nSo then\n<span class=\"math-container\">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span>\n<span class=\"math-container\">$$=(-1)^n~n!~\eta(n+1)$$</span>\nUsing the <a href=\"https://mathworld.wolfram.com/DirichletEtaFunction.html\" rel=\"nofollow noreferrer\">Dirichlet eta function</a>. In the case of <span class=\"math-container\">$n=5$</span> we get\n<span class=\"math-container\">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p>\n<p>Which is confirmed numerically by Mathematica:\n<a href=\"https://i.stack.imgur.com/WRFyT.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/WRFyT.png\" alt=\"nintegrate\" /></a></p>\n" } ]
Question: Proving $\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x = -\frac{31\pi^6}{252}$ Answer 1: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carry out the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 2: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^{n-1}e^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^{n-1}\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^{n-1}}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^{n-1}}{n^{s+1}}\\ &amp;=(-1)^s(1-2^{-s})\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 3: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \pi^6\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 4: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>.
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 5: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 6: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 7: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 8: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. 
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 9: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 10: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 11: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 12: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. 
<span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 13: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 14: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 15: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 16: <p>Use a geometric series. 
<span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 17: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 18: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). 
\end{align}$$</span></p> Answer 19: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 20: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 21: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. 
\left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 22: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 23: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 24: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. 
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 25: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 26: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 27: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 28: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. 
<span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 29: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 30: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 31: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 32: <p>Use a geometric series. 
<span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 33: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 34: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). 
\end{align}$$</span></p> Answer 35: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 36: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 37: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. 
\left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 38: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 39: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 40: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. 
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 41: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 42: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 43: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 44: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. 
<span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 45: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 46: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 47: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 48: <p>Use a geometric series. 
<span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 49: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 50: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). 
\end{align}$$</span></p> Answer 51: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 52: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 53: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. 
\left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 54: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 55: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 56: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. 
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 57: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 58: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 59: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 60: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. 
<span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 61: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 62: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 63: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 64: <p>Use a geometric series. 
<span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 65: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 66: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). 
\end{align}$$</span></p> Answer 67: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 68: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 69: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. 
\left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 70: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 71: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 72: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. <span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. 
In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p> Answer 73: <p>HINT:</p> <p>Enforce the substitution $x\to e^{-x}$, expand the resulting denominator in a geometric series of $\sum_{n=0}^{\infty}(-1)^ne^{-nx}$, interchange the sum and integral, carryout the integral by either successive IBP or differentiating under the integral, and evaluate the resulting series representation of $\zeta(6)$.</p> <p>Alternatively, note that</p> <p>$$\int_0^1 \frac{\log^5(x)}{1+x}\,dx=\left. \left(\frac{d^5}{da^5}\int_0^1\frac{x^a}{1+x}\,dx\right)\right|_{a=0}$$</p> Answer 74: <p>Let <span class="math-container">$\Re(s)&gt;0$</span> and <span class="math-container">$$J(s)=\int_0^1\frac{\ln^sx}{1+x}dx.$$</span> Letting <span class="math-container">$x=e^{-u}$</span>, <span class="math-container">$$\begin{align} J(s)&amp;=(-1)^s\int_0^\infty\frac{u^se^{-u}}{1+e^{-u}}du\\ &amp;=(-1)^s\int_0^\infty u^s\sum_{n\ge1}(-1)^ne^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}(-1)^n\int_0^\infty u^se^{-nu}du\\ &amp;=(-1)^s\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\int_0^\infty u^se^{-u}du\\ &amp;=(-1)^s\Gamma(s+1)\sum_{n\ge1}\frac{(-1)^n}{n^{s+1}}\\ &amp;=(-1)^s(2^{-s}-1)\Gamma(s+1)\zeta(s+1). \end{align}$$</span></p> Answer 75: <p><span class="math-container">\begin{align}J&amp;=\int_0^1 \frac{(\ln(x))^5}{1+x} \mathrm{d}x \\ &amp;=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\int_0^1 \frac{2u(\ln(u))^5}{1-u^2} \mathrm{d}u\\ &amp;\overset{x=u^2}=\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x-\frac{1}{2^5}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=\frac{31}{32}\int_0^1 \frac{(\ln(x))^5}{1-x} \mathrm{d}x\\ &amp;=-\frac{31}{32}\times \frac{5!}{945}\times \zeta(6)\\ &amp;=\boxed{-\frac{31\pi^6}{252}} \end{align}</span> I assume: <span class="math-container">$\displaystyle \int_0^1 \frac{\ln^5 x}{1-x}dx=-5!\zeta(6)$</span> and <span class="math-container">$\displaystyle\zeta(6)=\frac{\pi^6}{945}$</span></p> Answer 76: <p>Use a geometric series. <span class="math-container">$$\frac{1}{1+x}=\sum_{k=0}^\infty(-1)^kx^k$$</span> So <span class="math-container">$$\mathcal{I}_n=\int_0^1\frac{\ln(x)^n}{1+x}\mathrm{d}x=\sum_{m=0}^\infty\left[(-1)^mI_{m,n}\right]$$</span> Where <span class="math-container">$$I_{m,n}=\int_0^1 x^m\ln(x)^n\mathrm{d}x$$</span> Now use a substitution <span class="math-container">$x=e^{-t}$</span> which changes our region of integration to <span class="math-container">$(\infty,0]$</span>. 
<span class="math-container">$$I_{m,n}=\int_\infty^0(e^{-t})^m(-t)^n(-e^{-t})\mathrm{d}t$$</span> <span class="math-container">$$=(-1)^n\int_0^\infty t^ne^{-t(m+1)}\mathrm{d}t$$</span> Now use another substitution <span class="math-container">$z=t(m+1)$</span>: <span class="math-container">$$I_{m,n}=(-1)^n\int_0^\infty\left(\frac{z}{m+1}\right)^ne^{-z}\frac{\mathrm{d}z}{m+1}$$</span> <span class="math-container">$$=\frac{(-1)^n ~n!}{(m+1)^{n+1}}$$</span> So then <span class="math-container">$$\mathcal{I}_n=(-1)^nn!\sum_{m=0}^\infty\frac{(-1)^m}{(m+1)^{n+1}}$$</span> <span class="math-container">$$=(-1)^nn!\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^{n+1}}$$</span> <span class="math-container">$$=(-1)^n~n!~\eta(n+1)$$</span> Using the <a href="https://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet eta function</a>. In the case of <span class="math-container">$n=5$</span> we get <span class="math-container">$$\mathcal{I}_5=-5!\cdot\eta(6)=\frac{-31\pi^6}{252}$$</span></p> <p>Which is confirmed numerically by Mathematica: <a href="https://i.stack.imgur.com/WRFyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRFyT.png" alt="nintegrate" /></a></p>
Find all prime solutions of the equation $5x^2-7x+1=y^2.$
2,117,054
10
https://math.stackexchange.com/questions/2117054/find-all-prime-solutions-of-equation-5x2-7x1-y2
<p>Find all prime solutions of the equation $5x^2-7x+1=y^2.$</p> <p>It is easy to see that $y^2+2x^2=1 \mod 7.$ Since the nonzero quadratic residues $\mod 7$ are $1,2,4$, it follows (for $x,y\neq 7$) that $y^2=4 \mod 7$ and $x^2=2 \mod 7$, i.e. $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ </p> <p>In the same way, from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$, or $y^2=-1 \mod 5$ and $x=1 \mod 5$ (this tacitly assumes $y\neq 5$; note that the solution $(3,5)$ has $y=5$).</p> <p>How do I put the two cases together?</p> <p>A computer search finds two prime solutions, $(3,5)$ and $(11,23).$</p>
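<p>A minimal sketch of such a computer search in Mathematica (the bound $10^4$ is an arbitrary illustrative choice):</p> <pre><code>(* search over primes x up to 10^4 for prime y with 5x^2 - 7x + 1 = y^2 *)
Do[
 With[{s = Sqrt[5 x^2 - 7 x + 1]},
  If[IntegerQ[s] &amp;&amp; PrimeQ[s], Print[{x, s}]]],
 {x, Prime[Range[PrimePi[10^4]]]}]
(* prints {3, 5} and {11, 23} *)
</code></pre>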
[ { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. 
This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected, being divisible by $3$.\nPlugging this into $(1)$ gives the solution $k=0$, i.e.\n$$x = 11, \\space y = 23$$\n(the case $x=3$, excluded by the assumption $x,y&gt;3$, supplies the other solution $y=5$).</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ alternately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime),\n so $kx=y\\pm1$ with $k$ even.</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$.
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is another solution, it has more than 4000 digits, which makes me think there is no solution beyond the two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code>While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code>.</p>\n" } ]
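<p>A minimal Mathematica sketch (the variable name <code>m</code> is arbitrary) iterating the Pell-type recurrence from the answer above, as a check that it preserves $z^2-5w^2=29$ and reproduces the prime solutions:</p> <pre><code>(* apply (z, w) -&gt; (9 z + 20 w, 4 z + 9 w) repeatedly to the seed {7, 2} *)
m = {{9, 20}, {4, 9}};
NestList[m . # &amp;, {7, 2}, 3]
(* {{7, 2}, {103, 46}, {1847, 826}, {33143, 14822}} *)

(* z = 103, w = 46 gives x = (z + 7)/10 = 11 and y = w/2 = 23;
   the seed {23, 10} gives x = 3, y = 5 directly *)
</code></pre>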
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Thus, the case <span class=\"math-container\">$x=b$</span> does not occur.</p>\n<p>Therefore <span class=\"math-container\">$(x,y)= (11,23)$</span> and <span class=\"math-container\">$(x,y)=(3,5)$</span> are the only solutions\nwhere both coordinates are prime numbers.</p>\n" }, { "answer_id": 2117145, "score": 2, "text": "<p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \\pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so\n$$x^2, y^2 = 1 \\text{ mod } 24.$$\nSubstituting in the equation gives $$x = 24k+11 $$ for some integer $k$.\nRearranging the original equation we get\n$$x(5x-7)=(y-1)(y+1), \\tag{1}$$\ntherefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>\n\n<p>Solving for $x$ gives\n $$ x = \\frac{17}{10} + \\frac{1}{10}\\sqrt{20y^2+29}&gt;\\frac{1}{3}(y+1).$$\nNote that $x$ is odd and $y \\pm 1$ is even, so $x \\ne y\\pm1$. This forces $x = \\frac{1}{2} (y \\pm1)$, or \n$$y = 2x \\pm 1 = 48k + 22 \\pm 1 \\Rightarrow y = 48k+23.$$\n$48k+21$ is rejected being divisible by 3.\nPlugging these in $(1)$ gives the solution $k=0$ or\n$$x = 11, \\space y = 23.$$</p>\n" }, { "answer_id": 2117180, "score": 2, "text": "<p>Completing the square and dividing by $5$, we have </p>\n\n<p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p>\n\n<p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p>\n\n<p>$$ z^2 - 5 w^2 = 29$$</p>\n\n<p>The positive integer solutions of this can be written as</p>\n\n<p>$$\\pmatrix{z\\cr w\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr} \\ \\text{or}\\ \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nfor nonnegative integers $n$.</p>\n\n<p>Now you want $w$ to be even and $z \\equiv 3 \\mod 10$. All the solutions will have $w$ even, and $z$ altermately $\\equiv \\pm 3 \\mod 10$. Thus for $n$ even you get integers for $x,y$ with\n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{23\\cr 10\\cr}$$\nand for $n$ odd, \n$$ \\pmatrix{z_n\\cr w_n\\cr} = \\pmatrix{9 &amp; 20\\cr 4 &amp; 9\\cr}^n \\pmatrix{7\\cr 2\\cr}$$\nYou do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$)\nand $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p>\n\n<ul>\n<li>$x_n \\equiv 0 \\mod 3$ for $n \\equiv 0$ or $3 \\mod 4$. </li>\n<li>$x_n \\equiv 0 \\mod 17$ for $n \\equiv 2$ or $3 \\mod 6$. </li>\n<li>$x_n \\equiv 0 \\mod 5$ or $y_n \\equiv 0 \\mod 5$ for $n \\equiv 0,3, 4, 5, 6, 9 \\mod 10$. </li>\n<li>$x_n \\equiv 0 \\mod 11$ for $n \\equiv 1, 8 \\mod 10$.</li>\n<li>$x_n \\equiv 0 \\mod 13$ for $n \\equiv 3, 5, 6, 7, 8, 10 \\mod 14$.</li>\n<li>$y_n \\equiv 0 \\mod 23$ for $n \\equiv 1, 2, 5, 6 \\mod 8$.</li>\n</ul>\n\n<p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p>\n" }, { "answer_id": 2117173, "score": 1, "text": "<p>Can we just charge straight at this?</p>\n\n<p>$y$ is odd. $x=2 \\ (\\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>\n\n<p>$x(5x-7) = (y-1)(y+1)$, so $x \\mid (y-1) $ or $x \\mid (y+1)$ ($x$ is prime)\n so $kx=y\\pm1$, $k$ even</p>\n\n<p>$k\\ge4$ is too large: $(kx\\pm1)^2\\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. 
So only $k=2$, that is $x=\\frac 12(y\\pm1)$, makes the equality feasible.</p>\n\n<p>Considering the two cases: </p>\n\n<ul>\n<li><p><strong>(1)</strong> $x=\\frac 12(y+1)$, $y=2x-1$:<br>\n$x(5x-7) = 4x(x-1) \\implies x = 3, y=5$</p></li>\n<li><p><strong>(2)</strong> $x=\\frac 12(y-1)$, $y=2x+1$:<br>\n$x(5x-7) = 4x(x+1) \\implies x = 11, y=23$</p></li>\n</ul>\n\n<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>\n" }, { "answer_id": 2117134, "score": 0, "text": "<p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p>\n\n<p>In Mathematica,</p>\n\n<pre><code>i=1;\nans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers];\nans2=ans/.C[1]-&gt;i//RootReduce\nDynamic@i\nDynamic[IntegerLength@ans2[[All,All,2]]]\n</code></pre>\n\n<p>and then run</p>\n\n<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce]\n</code></pre>\n\n<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>\n\n<pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}}\n</code></pre>\n\n<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>\n" }, { "answer_id": 2117164, "score": 3, "text": "<p>Since <span class=\"math-container\">$(0,1)$</span> is a solution to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>. That will give us:</p>\n<p><span class=\"math-container\">$$x:=\\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>\nand\n<span class=\"math-container\">$$y:=\\frac{a}{b}x+1$$</span></p>\n<p>where <span class=\"math-container\">$a,b\\in \\mathbb{Z}$</span>, <span class=\"math-container\">$b\\neq 0$</span>.</p>\n<p>Since <span class=\"math-container\">$x$</span> is prime, it follows that either <span class=\"math-container\">$b=1$</span> or <span class=\"math-container\">$x=b$</span>.</p>\n<ul>\n<li><p><em><strong>case <span class=\"math-container\">$b=1$</span></strong></em></p>\n<p>This gives us <span class=\"math-container\">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class=\"math-container\">$x\\in \\mathbb{Z}$</span>,\nwe get <span class=\"math-container\">$a=\\pm 2$</span>, and then <span class=\"math-container\">$x=11$</span> or <span class=\"math-container\">$x=3$</span>.\nThose will give <span class=\"math-container\">$y=23$</span> or <span class=\"math-container\">$y=5$</span>, repectively.</p>\n</li>\n<li><p><strong>case <span class=\"math-container\">$x=b$</span></strong>.</p>\n</li>\n</ul>\n<p>We have that <span class=\"math-container\">$\\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>\n<p><span class=\"math-container\">$$(*) \\hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>\n<p>Since <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are prime, and <span class=\"math-container\">$x=2$</span> or <span class=\"math-container\">$y=2$</span> do not give solutions to <span class=\"math-container\">$5x^2-7x+1=y^2$</span>, we conclude that <span class=\"math-container\">$y=a+1$</span> and <span class=\"math-container\">$x=b$</span> are ODD primes. In particular, <span class=\"math-container\">$a(a + 2)\\equiv 0 \\mod 4$</span>.\nNow reducing (*) <span class=\"math-container\">$\\mod 4$</span>, contradicts the fact that <span class=\"math-container\">$b$</span> is odd. 
Question: Find all prime solutions of equation $5x^2-7x+1=y^2.$ Answer 1: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, respectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span> contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 2: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $7x \equiv 5 \text{ mod } 24$, hence $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{7}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected, being divisible by 3.
Plugging these in $(1)$ gives the solution $k=0$, that is $$x = 11, \space y = 23.$$</p> Answer 3: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ alternately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. </li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 4: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime), so $kx=y\pm1$ with $k$ even.</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 5: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p>
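A quick numerical cross-check of the two solutions above (for instance $5\cdot 11^2 - 7\cdot 11 + 1 = 605 - 77 + 1 = 529 = 23^2$): the sketch below is a minimal brute-force scan, assuming the same Mathematica used in Answer 5; it is only an illustration of the claim, not part of any answer, and simply tests the first $10^4$ primes $x$ for a perfect-square value of $5x^2-7x+1$ with a prime square root. <pre><code>(* keep pairs {x, y} where y = Sqrt[5 x^2 - 7 x + 1] is a prime integer; Sqrt stays symbolic unless the value is a perfect square, so the pattern y_Integer filters exactly the square cases *) candidates = Table[{x, Sqrt[5 x^2 - 7 x + 1]}, {x, Prime[Range[10^4]]}]; Cases[candidates, {_, y_Integer} /; PrimeQ[y]] (* expected: {{3, 5}, {11, 23}} *)</code></pre> Unlike the Solve-based loop in Answer 5, this scan is bounded, so it only confirms that no further prime solution exists below the $10^4$-th prime; the completeness arguments are those of Answers 1-4.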
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 19: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 20: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 21: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 22: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 23: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 24: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 25: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 26: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 27: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 28: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 29: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 30: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 31: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 32: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 33: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 34: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 35: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 36: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 37: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 38: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 39: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 40: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 41: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 42: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 43: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 44: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 45: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 46: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 47: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 48: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 49: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 50: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 51: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 52: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 53: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 54: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 55: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 56: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 57: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 58: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 59: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 60: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 61: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 62: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 63: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 64: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 65: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 66: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 67: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 68: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 69: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 70: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 71: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 72: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 73: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 74: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 75: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 76: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 77: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 78: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 79: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 80: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 81: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 82: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 83: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 84: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 85: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 86: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 87: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 88: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 89: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 90: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p> Answer 91: <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p> <p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span> and <span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p> <p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p> <p>Since <span class="math-container">$x$</span> is prime, it follows that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p> <ul> <li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p> <p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>, we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>. 
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, repectively.</p> </li> <li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p> </li> </ul> <p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p> <p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p> <p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>. Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p> <p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions where both coordinates are prime numbers.</p> Answer 92: <p>Try working mod $3$ and mod $8$. Assuming $x, y&gt;3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so $$x^2, y^2 = 1 \text{ mod } 24.$$ Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$. Rearranging the original equation we get $$x(5x-7)=(y-1)(y+1), \tag{1}$$ therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p> <p>Solving for $x$ gives $$ x = \frac{17}{10} + \frac{1}{10}\sqrt{20y^2+29}&gt;\frac{1}{3}(y+1).$$ Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or $$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$ $48k+21$ is rejected being divisible by 3. Plugging these in $(1)$ gives the solution $k=0$ or $$x = 11, \space y = 23.$$</p> Answer 93: <p>Completing the square and dividing by $5$, we have </p> <p>$$ (10 x - 7)^2 - 20 y^2 = 29$$</p> <p>Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation</p> <p>$$ z^2 - 5 w^2 = 29$$</p> <p>The positive integer solutions of this can be written as</p> <p>$$\pmatrix{z\cr w\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$.</p> <p>Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ altermately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 &amp; 20\cr 4 &amp; 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general,</p> <ul> <li>$x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. </li> <li>$x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. </li> <li>$x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. 
</li> <li>$x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$.</li> <li>$x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$.</li> <li>$y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$.</li> </ul> <p>And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions. </p> Answer 94: <p>Can we just charge straight at this?</p> <p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p> <p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even</p> <p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $&gt;5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p> <p>Considering the two cases: </p> <ul> <li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br> $x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li> <li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br> $x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li> </ul> <p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p> Answer 95: <p>If there is a solution it has more than 4000 digits, which makes me think there is no solution beyond the one two already mentioned.</p> <p>In Mathematica,</p> <pre><code>i=1; ans=Solve[5 x ^2-7x+1==y^2&amp;&amp;x&gt;0&amp;&amp;y&gt;0,{x,y},Integers]; ans2=ans/.C[1]-&gt;i//RootReduce Dynamic@i Dynamic[IntegerLength@ans2[[All,All,2]]] </code></pre> <p>and then run</p> <pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]-&gt;i//RootReduce] </code></pre> <p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p> <pre><code>{{x-&gt;3,y-&gt;5},{x-&gt;11,y-&gt;23}} </code></pre> <p>but there appear to be no easily found solutions beyond <code>i=1</code></p>
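<p>As a quick numerical sanity check of this conclusion, here is a minimal Python sketch (my own illustration, not code from the thread; it assumes <code>sympy</code> is available for primality testing, and the helper name <code>orbit</code> is made up). It replays the two orbits of $(z, w) \mapsto (9z + 20w, 4z + 9w)$ from the seeds $(7, 2)$ and $(23, 10)$, recovers $(x, y)$ whenever $z \equiv 3 \mod 10$, and tests both coordinates for primality. It only inspects finitely many $n$, so it illustrates the covering argument rather than proving it.</p> <pre><code>from sympy import isprime

def orbit(z, w, steps=30):
    # One Pell orbit: (z, w) -&gt; (9z + 20w, 4z + 9w), which preserves z^2 - 5w^2 = 29.
    for _ in range(steps):
        yield z, w
        z, w = 9 * z + 20 * w, 4 * z + 9 * w

prime_pairs = []
for seed in [(7, 2), (23, 10)]:
    for z, w in orbit(*seed):
        if z % 10 == 3 and w % 2 == 0:
            # Undo the substitution z = 10x - 7, w = 2y used in the answer above.
            x, y = (z + 7) // 10, w // 2
            assert 5 * x * x - 7 * x + 1 == y * y
            if isprime(x) and isprime(y):
                prime_pairs.append((x, y))

print(prime_pairs)  # only (11, 23) and (3, 5) show up in this range
</code></pre>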
Probability that the difference of the max and min of three random numbers between 0 and 2 is less than 1/4?
2,078,987
9
https://math.stackexchange.com/questions/2078987/probability-that-the-difference-of-the-max-and-min-of-three-random-numbers-betwe
<p>Three numbers are chosen at random between 0 and 2. What is the probability that the difference between the largest number and the least number is less than 1/4?</p> <p>This is a problem from JHMT Calculus 2011. I'm trying to see if there are any better solutions than the one provided.</p> <p>Here is the solution provided: <a href="https://i.stack.imgur.com/NCXWx.png"><img src="https://i.stack.imgur.com/NCXWx.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/nI5KW.png"><img src="https://i.stack.imgur.com/nI5KW.png" alt="enter image description here"></a></p>
[ { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. 
We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. 
</p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. 
$$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. 
In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. 
We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. 
</p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. 
$$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" }, { "answer_id": 2079023, "score": 2, "text": "<p>Basically along the same lines as provided answer. </p>\n\n<p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\\frac 18$. </p>\n\n<p>More generally, suppose that $X_1,\\dots,X_n$ are IID $\\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\\alpha$. The answer then is </p>\n\n<p>$$ n \\times \\int_0^1 P( \\bigcap_{j=2}^n \\{t&lt;X_j &lt; t+\\alpha\\}|X_1=t) dt = n \\times \\int_0^1 \\min (\\alpha,1-t)^{n-1} dt.$$ </p>\n\n<p>The expression on the right is </p>\n\n<p>$$ n \\times \\left ( \\alpha^{n-1}\\times (1-\\alpha) + \\int_0^\\alpha u^{n-1} du\\right)=n \\alpha^{n-1} (1-\\alpha) + \\alpha^n = n \\alpha^{n-1} -(n-1) \\alpha^n. $$ </p>\n\n<p>In the case $n=3$ and $\\alpha=\\frac 18$, this formula gives $\\frac{11}{256}$. </p>\n" }, { "answer_id": 2079074, "score": 0, "text": "<p>Given: parent random variable $X \\sim \\text{Uniform}(0,2)$ with pdf $f(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bF4SE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bF4SE.png\" alt=\"enter image description here\"></a></p>\n\n<p><strong>Order Statistics</strong></p>\n\n<p>A more general framework for solving such problems is to use order statistics. 
In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\\text{st}}$ order statistic) and sample maximum (the $n^{\\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p>\n\n<p><a href=\"https://i.stack.imgur.com/ET8JB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ET8JB.png\" alt=\"enter image description here\"></a></p>\n\n<p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\\text{nd}}$ and $n^{\\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p>\n\n<p>In this instance, we seek $P(X_n - X_1 &lt; \\frac14)$: </p>\n\n<p><a href=\"https://i.stack.imgur.com/pdkaD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pdkaD.png\" alt=\"enter image description here\"></a></p>\n\n<p>When the sample size $n = 3$, this is equal to $\\frac{11}{256}$.</p>\n\n<p><strong>Notes</strong></p>\n\n<ol>\n<li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li>\n</ol>\n" } ]
Question: Probability that the difference of the max and min of three random numbers between 0 and 2 is less than 1/4? Answer 1: <p>Basically along the same lines as the provided answer. </p> <p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\mbox{U}[0,1]$ and ask for the probability that the difference between the maximum and the minimum is at most $\frac 18$: rescaling $[0,2]$ down to $[0,1]$ halves the threshold $\frac 14$ to $\frac 18$. </p> <p>More generally, suppose that $X_1,\dots,X_n$ are IID $\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\alpha$. Conditioning on the value $t$ of the minimum, and multiplying by $n$ to account for which of the $n$ variables attains it, the answer is </p> <p>$$ n \times \int_0^1 P\Big( \bigcap_{j=2}^n \{t&lt;X_j &lt; t+\alpha\} \,\Big|\, X_1=t\Big)\, dt = n \times \int_0^1 \min (\alpha,1-t)^{n-1}\, dt.$$ </p> <p>The expression on the right is </p> <p>$$ n \times \left( \alpha^{n-1}\times (1-\alpha) + \int_0^\alpha u^{n-1}\, du\right) = n \alpha^{n-1} (1-\alpha) + \alpha^n = n \alpha^{n-1} -(n-1) \alpha^n. $$ </p> <p>In the case $n=3$ and $\alpha=\frac 18$, this formula gives $\frac{11}{256}$. </p>
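A quick numerical check of the closed form in Answer 1 (an editorial addition, not part of the original thread): the sketch below compares the formula $n \alpha^{n-1} - (n-1) \alpha^n$ against a direct Monte Carlo estimate. The trial count is an arbitrary choice.

```python
import random

def range_prob_closed_form(n: int, alpha: float) -> float:
    # P(max - min <= alpha) for n IID Uniform[0,1] draws,
    # using the formula n*alpha^(n-1) - (n-1)*alpha^n derived in Answer 1.
    return n * alpha ** (n - 1) - (n - 1) * alpha ** n

def range_prob_monte_carlo(n: int, alpha: float, trials: int = 200_000) -> float:
    # Empirical estimate of the same probability by direct simulation.
    hits = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        if max(xs) - min(xs) <= alpha:
            hits += 1
    return hits / trials

# Three draws on [0,2] with threshold 1/4 rescale to alpha = 1/8 on [0,1].
print(range_prob_closed_form(3, 1 / 8))   # 0.04296875, i.e. 11/256
print(range_prob_monte_carlo(3, 1 / 8))   # roughly 0.043
```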
We could just as easily find the joint pdf of the $2^{\text{nd}}$ and $n^{\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p> <p>In this instance, we seek $P(X_n - X_1 &lt; \frac14)$: </p> <p><a href="https://i.stack.imgur.com/pdkaD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pdkaD.png" alt="enter image description here"></a></p> <p>When the sample size $n = 3$, this is equal to $\frac{11}{256}$.</p> <p><strong>Notes</strong></p> <ol> <li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li> </ol> Answer 15: <p>Basically along the same lines as provided answer. </p> <p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\frac 18$. </p> <p>More generally, suppose that $X_1,\dots,X_n$ are IID $\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\alpha$. The answer then is </p> <p>$$ n \times \int_0^1 P( \bigcap_{j=2}^n \{t&lt;X_j &lt; t+\alpha\}|X_1=t) dt = n \times \int_0^1 \min (\alpha,1-t)^{n-1} dt.$$ </p> <p>The expression on the right is </p> <p>$$ n \times \left ( \alpha^{n-1}\times (1-\alpha) + \int_0^\alpha u^{n-1} du\right)=n \alpha^{n-1} (1-\alpha) + \alpha^n = n \alpha^{n-1} -(n-1) \alpha^n. $$ </p> <p>In the case $n=3$ and $\alpha=\frac 18$, this formula gives $\frac{11}{256}$. </p> Answer 16: <p>Given: parent random variable $X \sim \text{Uniform}(0,2)$ with pdf $f(x)$:</p> <p><a href="https://i.stack.imgur.com/bF4SE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bF4SE.png" alt="enter image description here"></a></p> <p><strong>Order Statistics</strong></p> <p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\text{st}}$ order statistic) and sample maximum (the $n^{\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p> <p><a href="https://i.stack.imgur.com/ET8JB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ET8JB.png" alt="enter image description here"></a></p> <p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\text{nd}}$ and $n^{\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p> <p>In this instance, we seek $P(X_n - X_1 &lt; \frac14)$: </p> <p><a href="https://i.stack.imgur.com/pdkaD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pdkaD.png" alt="enter image description here"></a></p> <p>When the sample size $n = 3$, this is equal to $\frac{11}{256}$.</p> <p><strong>Notes</strong></p> <ol> <li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li> </ol> Answer 17: <p>Basically along the same lines as provided answer. </p> <p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\frac 18$. 
</p> <p>More generally, suppose that $X_1,\dots,X_n$ are IID $\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\alpha$. The answer then is </p> <p>$$ n \times \int_0^1 P( \bigcap_{j=2}^n \{t&lt;X_j &lt; t+\alpha\}|X_1=t) dt = n \times \int_0^1 \min (\alpha,1-t)^{n-1} dt.$$ </p> <p>The expression on the right is </p> <p>$$ n \times \left ( \alpha^{n-1}\times (1-\alpha) + \int_0^\alpha u^{n-1} du\right)=n \alpha^{n-1} (1-\alpha) + \alpha^n = n \alpha^{n-1} -(n-1) \alpha^n. $$ </p> <p>In the case $n=3$ and $\alpha=\frac 18$, this formula gives $\frac{11}{256}$. </p> Answer 18: <p>Given: parent random variable $X \sim \text{Uniform}(0,2)$ with pdf $f(x)$:</p> <p><a href="https://i.stack.imgur.com/bF4SE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bF4SE.png" alt="enter image description here"></a></p> <p><strong>Order Statistics</strong></p> <p>A more general framework for solving such problems is to use order statistics. In particular, given a random sample of size $n$ drawn on parent $X$, the joint pdf of the sample minimum (the $1^{\text{st}}$ order statistic) and sample maximum (the $n^{\text{th}}$ order statistic), say $g(x_1, x_n)$ is:</p> <p><a href="https://i.stack.imgur.com/ET8JB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ET8JB.png" alt="enter image description here"></a></p> <p>where I am using the <code>OrderStat</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to automate the mechanics. We could just as easily find the joint pdf of the $2^{\text{nd}}$ and $n^{\text{th}}$ order statistics in the same way, or indeed any combination of 3 or more such ordered variables.</p> <p>In this instance, we seek $P(X_n - X_1 &lt; \frac14)$: </p> <p><a href="https://i.stack.imgur.com/pdkaD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pdkaD.png" alt="enter image description here"></a></p> <p>When the sample size $n = 3$, this is equal to $\frac{11}{256}$.</p> <p><strong>Notes</strong></p> <ol> <li>The <code>Prob</code> function used above is also from <em>mathStatica</em>. As disclosure, I should add that I am one of the authors.</li> </ol> Answer 19: <p>Basically along the same lines as provided answer. </p> <p>First note that the answer won't change if we assume $X_1,X_2,X_3$ are IID $\mbox{U}[0,1]$, and we're asking what is the probability that the difference between the maximum and the minimum is at most $\frac 18$. </p> <p>More generally, suppose that $X_1,\dots,X_n$ are IID $\mbox{U}[0,1]$, and we're asked for the probability that the difference between the maximum and minimum is at most $\alpha$. The answer then is </p> <p>$$ n \times \int_0^1 P( \bigcap_{j=2}^n \{t&lt;X_j &lt; t+\alpha\}|X_1=t) dt = n \times \int_0^1 \min (\alpha,1-t)^{n-1} dt.$$ </p> <p>The expression on the right is </p> <p>$$ n \times \left ( \alpha^{n-1}\times (1-\alpha) + \int_0^\alpha u^{n-1} du\right)=n \alpha^{n-1} (1-\alpha) + \alpha^n = n \alpha^{n-1} -(n-1) \alpha^n. $$ </p> <p>In the case $n=3$ and $\alpha=\frac 18$, this formula gives $\frac{11}{256}$. </p> Answer 20: <p>Given: parent random variable $X \sim \text{Uniform}(0,2)$ with pdf $f(x)$:</p> <p><a href="https://i.stack.imgur.com/bF4SE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bF4SE.png" alt="enter image description here"></a></p> <p><strong>Order Statistics</strong></p> <p>A more general framework for solving such problems is to use order statistics. 
Rates and Ratio work problem
2,079,003
9
https://math.stackexchange.com/questions/2079003/rates-and-ratio-work-problem
<p>I've encountered this problem:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>The answer to this one is $84$ minutes. However, how does one arrive at this answer? Can someone explain why?</p> <p>Can anyone give a step-by-step method for solving this kind of problem?</p>
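As a quick check of the quoted $84$-minute answer: if each person paints at a constant rate, then $\frac{X_1 T_1}{Y_1} = \frac{X_2 T_2}{Y_2}$ (person-minutes per wall is constant), a relation derived in one of the answers below. Here is a minimal sketch of that computation (the helper name <code>work_time</code> is my own, not from the post):

```python
# Solve X1*T1/Y1 = X2*T2/Y2 for T2, assuming each person paints at a
# constant rate. The function name is illustrative, not from the post.

def work_time(people_1, minutes_1, walls_1, people_2, walls_2):
    """Return the minutes needed by people_2 to paint walls_2."""
    person_minutes_per_wall = people_1 * minutes_1 / walls_1  # 7*60/5 = 84
    return person_minutes_per_wall * walls_2 / people_2       # 84*10/10 = 84

print(work_time(7, 60, 5, 10, 10))  # 84.0
```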
[ { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find a number k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
How many minutes does it take 8 people to paint 8 walls?</p>\n\n<p><strong>Solution:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls\n42/6=7 minutes per wall</p>\n\n<p>It takes 7 people 7 minutes to paint 1 wall</p>\n\n<p>Each person paints 1/7 of the wall in 7 minutes</p>\n\n<p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p>\n\n<p>8 people paint 8/49 of 1 wall in 1 minute</p>\n\n<p>How many minutes does it take 8 people to paint 1 wall ?\n49/8=6 1/8</p>\n\n<p>It takes 8 people 6 1/8 minutes to paint 1 wall</p>\n\n<p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p>\n\n<p>8*(6 1/8)=49 minutes</p>\n\n<p>It takes 49 minutes for 8 people to paint 8 walls</p>\n" }, { "answer_id": 2079022, "score": 2, "text": "<p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p>\n\n<p>So one wall takes $\\frac{420}{5}=84$ minutes of work to paint.</p>\n\n<p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>\n" }, { "answer_id": 2079191, "score": 1, "text": "<p>Exploit linear dependence. </p>\n\n<p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p>\n\n<p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p>\n\n<p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p>\n" }, { "answer_id": 2079195, "score": 1, "text": "<p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>\n\n<blockquote>\n <p>It takes $60$\n minutes for $7$\n people to paint $5$\n walls. How many minutes does it take $10$\n people to paint $10$\n walls?</p>\n</blockquote>\n\n<p>Original time: $60$ minutes.</p>\n\n<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>\n\n<p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p>\n\n<p>($7\\times120 = 840$ minutes.)</p>\n\n<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>\n\n<hr>\n\n<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. 
Hence I feel it is a valuable approach worth mentioning.</p>\n\n<p>The \"formula\" is, of course, that \"walls per person per minute\" is a constant value.</p>\n\n<hr>\n\n<p>Actually <em>answering</em> such questions, I factually don't use any \"method\" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, \"What is $5+5$?\" If you actually count $5$ and then $5$ more I feel bad for you. But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>\n" }, { "answer_id": 2079008, "score": 8, "text": "<p>You are given that:</p>\n\n<blockquote>\n <p>It takes 60 minutes for 7 people to paint 5 walls.</p>\n</blockquote>\n\n<p><strong>Consider:</strong> Rate $\\times$ Time $=$ Output.</p>\n\n<p>Let us consider \"Rate\" to mean the rate at which <strong>one person</strong> works.</p>\n\n<p>And rather than writing out the whole word, let us just write $R$.</p>\n\n<p>In the given: $7R \\times 60$ minutes $=$ $5$ walls.</p>\n\n<p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p>\n\n<p>Dividing both sides by $7$, we get $R = 1/84$.</p>\n\n<p>Then you ask:</p>\n\n<blockquote>\n <p>How many minutes does it take 10 people to paint 10 walls?</p>\n</blockquote>\n\n<p>Set up similarly, we have: $10R \\times t$ minutes $= 10$ walls.</p>\n\n<p>Dividing both sides by $10$ and omitting units, we have: $R \\times t = 1$.</p>\n\n<p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p>\n\n<p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p>\n\n<p>It takes <strong>eighty four minutes</strong>.</p>\n\n<hr>\n\n<p>Here is an alternative approach, just for fun. Omitting units throughout:</p>\n\n<p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p>\n\n<p><strong>Note:</strong> Rate $=$ Output $\\div$ Time.</p>\n\n<p>In scenario one, the rate of one person working is: $5/(60\\cdot 7).$</p>\n\n<p>In scenario two, the rate of one person working is: $10/(t\\cdot 10).$</p>\n\n<p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p>\n\n<p>$$\\frac{5}{60 \\cdot 7} = \\frac{10}{t \\cdot 10} \\implies \\frac{1}{12 \\cdot 7} = \\frac{1}{t}$$</p>\n\n<p>Equating denominators (or \"cross multiplying\") we find $t = 12 \\cdot 7 = 84$.</p>\n" }, { "answer_id": 2079015, "score": 6, "text": "<p>There is a formula you may find useful:</p>\n\n<p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then\n$$\\boxed{\\frac{X_1T_1}{Y_1}\\ =\\ \\frac{X_2T_2}{Y_2}}$$\nIn this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p>\n\n<p>Proof of the formula:</p>\n\n<p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence\n$$Y_2\\ =\\ \\dfrac{X_2Y_1T_2}{X_1T_1}$$\nwhich can be rearranged to the formula above.</p>\n" }, { "answer_id": 2079004, "score": 2, "text": "<p>Here's a similar problem. You can look at this then work out yours.</p>\n\n<p><strong>Question:</strong>\nIt takes 42 minutes for 7 people to paint 6 walls. 
]
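All of the answers above reduce to the same invariant: the number of person-minutes per wall is constant. As a quick sanity check, here is a minimal Python sketch of that computation (an editorial addition, not part of any answer; the function name and structure are illustrative assumptions):

```python
# Minimal sketch of the "person-minutes per wall" invariant used by the
# answers above: 7 people * 60 minutes = 420 person-minutes for 5 walls,
# so each wall costs 420 / 5 = 84 person-minutes.

def minutes_needed(people: int, walls: int, person_minutes_per_wall: float) -> float:
    """Total person-minutes for the job, split evenly across the crew."""
    return walls * person_minutes_per_wall / people

rate = 7 * 60 / 5  # 84.0 person-minutes per wall, from the given data
print(minutes_needed(10, 10, rate))  # -> 84.0 minutes
```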
Question: Rates and Ratio work problem Answer 1: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$.</p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty-four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross-multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 2: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things, so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p>
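To see the boxed relation of Answer 2 in action, here is a small Python sketch (an editor's illustration, not part of the original answer) that rearranges $X_1T_1/Y_1 = X_2T_2/Y_2$ for $T_2$ and checks it against the given numbers; the function name and argument order are illustrative choices:

```python
# Solves X1*T1/Y1 == X2*T2/Y2 for T2, mirroring the boxed formula in Answer 2.

def t2_from_ratio(x1: int, t1: float, y1: int, x2: int, y2: int) -> float:
    # Rearranged: T2 = (X1 * T1 / Y1) * (Y2 / X2)
    return (x1 * t1 / y1) * (y2 / x2)

print(t2_from_ratio(7, 60, 5, 10, 10))  # -> 84.0
```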
Answer 3: <p>Here's a similar problem. You can look at this, then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls; 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall? 49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 4: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls require 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 5: <p>Exploit linear dependence.</p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5, and you are asked what M is such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M.</p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the number of walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7.</p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number: you get W multiplied by the same number. You need to find a number k such that W goes from 50/7 to 10, so (50/7)k=10, which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60, which is 84.</p> Answer 6: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>When actually <em>answering</em> such questions, I honestly don't use any "method" personally, as the calculations happen too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 7: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 8: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 9: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 10: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 11: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 12: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 13: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 14: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 15: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 16: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 17: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 18: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 19: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 20: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 21: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 22: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 23: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 24: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
Answer 21: <p>Here's a similar problem. You can look at this, then work out yours.</p>

<p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p>

<p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls, so $42/6 = 7$ minutes per wall.</p>

<p>It takes 7 people 7 minutes to paint 1 wall, so each person paints $1/7$ of the wall in 7 minutes.</p>

<p>Each person therefore paints $1/49$ of a wall in 1 minute, and at the same rate, 8 people paint $8/49$ of 1 wall in 1 minute.</p>

<p>How many minutes does it take 8 people to paint 1 wall? $49/8 = 6\tfrac18$.</p>

<p>It takes 8 people $6\tfrac18$ minutes to paint 1 wall, so it takes them $8\times 6\tfrac18 = 49$ minutes to paint 8 walls.</p>

<p>It takes 49 minutes for 8 people to paint 8 walls.</p>

Answer 22: <p>Seven people working for 60 minutes amounts to a total of $7\times 60 = 420$ person-minutes of work to paint 5 walls.</p>

<p>So one wall takes $\frac{420}{5}=84$ person-minutes of work to paint.</p>

<p>So 10 walls require 840 person-minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p>
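<p>(The person-minutes bookkeeping is easy to mechanize. The sketch below is my own illustration, with a made-up helper name <code>minutes_for</code>; it reproduces both the 84-minute answer and the 49-minute answer of the similar problem above.)</p>

<pre><code>from fractions import Fraction

def minutes_for(people, walls, pm_per_wall):
    """Minutes for `people` workers to paint `walls` walls,
    given the cost of one wall in person-minutes."""
    return Fraction(walls) * pm_per_wall / people

pm_wall = Fraction(7 * 60, 5)          # 84 person-minutes per wall
print(minutes_for(10, 10, pm_wall))    # 84

pm_wall2 = Fraction(7 * 42, 6)         # 49 person-minutes per wall
print(minutes_for(8, 8, pm_wall2))     # 49
</code></pre>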
Answer 23: <p>Exploit linear dependence.</p>

<p>You have a relation between a number $P$ of persons, a number $M$ of minutes, and a number $W$ of walls painted. You know that $P=7$, $M=60$ gives $W=5$, and you are asked what $M$ is such that $P=10$ gives $W=10$. So you should move from $P=7$ to $P=10$ and from $W=5$ to $W=10$, keeping track of the changes in $M$.</p>

<p>What happens if you double the number of people $P$ and keep the same amount of time $M$? Of course, the number of walls painted $W$ gets doubled. The same is true if you multiply $P$ by any number $k$: $W$ gets multiplied by the same number. So from $P=7$, $M=60$, $W=5$ you multiply both $P$ and $W$ by $10/7$ and find $P=10$, $M=60$, $W=(10/7)\cdot 5=50/7$.</p>

<p>Now what happens if you double the time $M$, keeping $P$ fixed? Of course the number of walls painted $W$ also doubles. The same is true if you multiply $M$ by any number: $W$ gets multiplied by the same number. You need to find a number $k$ such that $W$ goes from $50/7$ to $10$, so $(50/7)k=10$, which means $k=(7/50)\cdot 10 = 7/5$. The time $M$ then goes from $60$ to $(7/5)\cdot 60$, which is $84$.</p>

Answer 24: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p>

<blockquote><p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p></blockquote>

<p>Original time: $60$ minutes.</p>

<p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p>

<p>Pretend only $1$ person is working instead of the current $7$ -> multiply the time by $7$, giving $7\times120 = 840$ minutes.</p>

<p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p>

<hr>

<p>Of course I advise you to understand the formula. This method <em>is</em> the exact same sequence of steps you would take when working the formula algebraically, but I have found that students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p>

<p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p>

<hr>

<p>When actually <em>answering</em> such questions, I don't personally use any "method"; the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more, I feel bad for you. But steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p>
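<p>(The stepwise rescaling in the last two answers also translates directly into code. A minimal sketch, multiplying the time by the same factors described above; the variable names are mine.)</p>

<pre><code>from fractions import Fraction

time = Fraction(60)        # minutes for 7 people to paint 5 walls

time *= Fraction(10, 5)    # double the walls  -> double the time (120)
time *= 7                  # 1 person, not 7   -> seven times the time (840)
time /= 10                 # 10 people, not 1  -> a tenth of the time

print(time)                # 84
</code></pre>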
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 31: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 32: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 33: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 34: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 35: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 36: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 37: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 38: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 39: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 40: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 41: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 42: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 43: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 44: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 45: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 46: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 47: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 48: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 49: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 50: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 51: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 52: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 53: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 54: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 55: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 56: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 57: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 58: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 59: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 60: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 61: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 62: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 63: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 64: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 65: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 66: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 67: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 68: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 69: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 70: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 71: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 72: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 73: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 74: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 75: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p> Answer 76: <p>Seven people working for 60 minutes amounts to a total of 420 minutes of work to paint 5 walls.</p> <p>So one wall takes $\frac{420}{5}=84$ minutes of work to paint.</p> <p>So 10 walls requires 840 minutes to paint. And if you have 10 people working together, that comes to 84 minutes each.</p> Answer 77: <p>Exploit linear dependence. </p> <p>You have a relation between a number P of persons, a number M of minutes and a number W of walls painted. You know that P=7, M=60 gives W=5 and you are asked what is M such that P=10 gives W=10. So you should move from P=7 to P=10 and from W=5 to W=10 and keep track of the changes of M. </p> <p>What happens if you double the number of people P and keep the same amount of time M? Of course, the walls painted W gets doubled. The same is true if you multiply P by any number k: W gets multiplied by the same number. So from P=7, M=60, W=5 you multiply both P and W by 10/7 and find P=10, M=60, W=(10/7)5=50/7. </p> <p>Now what happens if you double the time M keeping P fixed? Of course the number of walls painted W also doubles. The same is true if you multiply M by any number, you get W multiplied by the same number. You need to find anumber k such that W goes from 50/7 to 10, so (50/7)k=10 which means k=(7/50)10 = 7/5. The time M then goes from 60 to (7/5)60 which is 84.</p> Answer 78: <p>These problems are not difficult to solve in your head, even without an explicit algebraic formula:</p> <blockquote> <p>It takes $60$ minutes for $7$ people to paint $5$ walls. How many minutes does it take $10$ people to paint $10$ walls?</p> </blockquote> <p>Original time: $60$ minutes.</p> <p>Double the count of walls from $5$ to $10$ -> double the time to $120$ minutes.</p> <p>Pretend only $1$ person instead of the current $7$ -> multiply the time by $7$.</p> <p>($7\times120 = 840$ minutes.)</p> <p>Now with $10$ times as many people ($10$ instead of $1$), it will take a tenth of that amount of time, which is $84$ minutes.</p> <hr> <p>Of course I advise you to understand the formula. This method <em>is</em> the exact same steps as you will take by computing the formula algebraically, but I have found students who trip on the algebraic manipulation of variables are able to pick up this approach quite easily, and <em>then</em> become able to make sense of the algebra when they couldn't before. Hence I feel it is a valuable approach worth mentioning.</p> <p>The "formula" is, of course, that "walls per person per minute" is a constant value.</p> <hr> <p>Actually <em>answering</em> such questions, I factually don't use any "method" personally as the calculations are done too fast to illustrate, whether through formulas or sentences. How do you answer the question, "What is $5+5$?" If you actually count $5$ and then $5$ more I feel bad for you. 
But, steps <em>can</em> be illustrated and followed through until you achieve the ability to just instantly state the answer.</p> Answer 79: <p>You are given that:</p> <blockquote> <p>It takes 60 minutes for 7 people to paint 5 walls.</p> </blockquote> <p><strong>Consider:</strong> Rate $\times$ Time $=$ Output.</p> <p>Let us consider "Rate" to mean the rate at which <strong>one person</strong> works.</p> <p>And rather than writing out the whole word, let us just write $R$.</p> <p>In the given: $7R \times 60$ minutes $=$ $5$ walls.</p> <p>Omitting units and rearranging, we have: $7R = 5/60 = 1/12$. </p> <p>Dividing both sides by $7$, we get $R = 1/84$.</p> <p>Then you ask:</p> <blockquote> <p>How many minutes does it take 10 people to paint 10 walls?</p> </blockquote> <p>Set up similarly, we have: $10R \times t$ minutes $= 10$ walls.</p> <p>Dividing both sides by $10$ and omitting units, we have: $R \times t = 1$.</p> <p>But we know $R = 1/84$, so this says: $t/84 = 1$.</p> <p>To finish off matters, multiply both sides by $84$ to obtain: $t = 84$.</p> <p>It takes <strong>eighty four minutes</strong>.</p> <hr> <p>Here is an alternative approach, just for fun. Omitting units throughout:</p> <p>In both scenarios, the <strong>rate</strong> of work is the same; we will use this to solve the problem.</p> <p><strong>Note:</strong> Rate $=$ Output $\div$ Time.</p> <p>In scenario one, the rate of one person working is: $5/(60\cdot 7).$</p> <p>In scenario two, the rate of one person working is: $10/(t\cdot 10).$</p> <p>We need to solve for $t$, but these expressions are equal. Let us simplify the resulting equation:</p> <p>$$\frac{5}{60 \cdot 7} = \frac{10}{t \cdot 10} \implies \frac{1}{12 \cdot 7} = \frac{1}{t}$$</p> <p>Equating denominators (or "cross multiplying") we find $t = 12 \cdot 7 = 84$.</p> Answer 80: <p>There is a formula you may find useful:</p> <p>If it takes time $T_1$ for $X_1$ people to do $Y_1$ things, and time $T_2$ for $X_2$ people to do $Y_2$ things, then $$\boxed{\frac{X_1T_1}{Y_1}\ =\ \frac{X_2T_2}{Y_2}}$$ In this particular problem, $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$; substituting in the formula gives $T_2=84$.</p> <p>Proof of the formula:</p> <p>If $X_1$ people can do $Y_1$ things, then 1 person can do $\dfrac{Y_1}{X_1}$ things in the same time, and so $X_2$ people can do $\dfrac{X_2Y_1}{X_1}$ in this time. Say, this time is $T_1$. Then in time 1, the same number of people $X_2$ can do $\dfrac{X_2Y_1}{X_1T_1}$ things so in time $T_2$ they can do $\dfrac{X_2Y_1T_2}{X_1T_1}$ things. Hence $$Y_2\ =\ \dfrac{X_2Y_1T_2}{X_1T_1}$$ which can be rearranged to the formula above.</p> Answer 81: <p>Here's a similar problem. You can look at this then work out yours.</p> <p><strong>Question:</strong> It takes 42 minutes for 7 people to paint 6 walls. How many minutes does it take 8 people to paint 8 walls?</p> <p><strong>Solution:</strong> It takes 42 minutes for 7 people to paint 6 walls 42/6=7 minutes per wall</p> <p>It takes 7 people 7 minutes to paint 1 wall</p> <p>Each person paints 1/7 of the wall in 7 minutes</p> <p>Each person paints 1/49 of the wall in 1 minute at the same rate...</p> <p>8 people paint 8/49 of 1 wall in 1 minute</p> <p>How many minutes does it take 8 people to paint 1 wall ? 
49/8=6 1/8</p> <p>It takes 8 people 6 1/8 minutes to paint 1 wall</p> <p>It takes 8 people 8*(6 1/8) minutes to paint 8 walls</p> <p>8*(6 1/8)=49 minutes</p> <p>It takes 49 minutes for 8 people to paint 8 walls</p>
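<p>As a quick consistency check (an added sketch, not part of the original thread), applying the boxed person-minutes formula from Answer 80 to the original problem, with $T_1=60$, $X_1=7$, $Y_1=5$, $X_2=10$, $Y_2=10$, gives</p> <p>$$\frac{X_1T_1}{Y_1}=\frac{7\cdot 60}{5}=84=\frac{X_2T_2}{Y_2}=\frac{10\,T_2}{10}=T_2,$$</p> <p>so $T_2=84$ minutes, agreeing with every solution method shown above.</p>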
Is there another way to solve this differential equation?
2,079,561
9
https://math.stackexchange.com/questions/2079561/is-there-another-way-to-solve-this-differential-equation
<p>To put this into some context, I was considering the general equation for damped harmonic motion:</p> <p>$\ddot z +2\gamma\dot z +\omega_0^2z=0$</p> <p>and specifically the limiting cases where the damping force (to which the second term corresponds) is zero, and where the restoring force (to which the last term corresponds) is zero.</p> <p>In the second case, I obtain the differential equation</p> <p>$\ddot z +2\gamma\dot z=0$</p> <p>and I solved this in the following way:</p> <p>I split $\ddot z$ into $\frac{d}{dt}\dot z$ and then separated the variables:</p> <p>$\int\frac{1}{\dot z}d\dot z =\int-2\gamma dt$</p> <p>which gives the expected exponential solution.</p> <p>Now I am considering the case where there is simple harmonic motion with no damping, i.e. the case that gives rise to the differential equation:</p> <p>$\ddot x +\omega_0^2 x=0$</p> <p>I can't figure out: is there a way to solve this without just assuming an exponential solution and finding the constants in the solution? For example, I was able to solve the equation above by separating the variables. I'm not sure this would work here, where we have a second derivative of $x$ as well as $x$ itself.</p> <p>Also, I would be grateful if anyone could recommend an online resource or book that explains well how to solve different types of integrals and differential equations. I have been trying to study this for some time but haven't come across anything that seems very complete, ideally with some worked examples.</p>
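<p>(For concreteness, a brief added completion of the computation sketched in the question, under the setup stated there: integrating $\int\frac{1}{\dot z}d\dot z=\int-2\gamma\,dt$ gives $\ln\dot z=-2\gamma t+C$, so</p> <p>$$\dot z(t)=v_0 e^{-2\gamma t}\quad\text{and}\quad z(t)=z_0+\frac{v_0}{2\gamma}\left(1-e^{-2\gamma t}\right),$$</p> <p>where $z_0=z(0)$ and $v_0=\dot z(0)$ are the initial conditions; this is the expected exponential solution mentioned above.)</p>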
[ { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. 
Since both solutions for $\\lambda$ are purely imaginary (i.e. complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>This can be written in the form:</p>\n\n<p>$$z(t)=A\\cos{(\\omega t+\\varphi)}$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution of this differential equation has exponential form. This equation belongs to a class of differential equations which have simple solutions and <strong>indeed have exponential form</strong> in a more subtle sense. Now we will find out what class of differential equations we are talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually a <em>system of homogeneous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>This can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>Doesn't this equation remind you of anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $, with its solution $ x(t) = ke^{At} $, where $k$ is some real constant. It is from here that the motivation for matrix exponentiation arises. Consider the following question:</p>\n\n<p><em>Is it possible to extend the definition of exponentiation from real numbers to real square matrices such that it preserves all its properties, including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $?</em></p>\n\n<p>The answer is yes, and it is done naturally by just substituting real numbers by real matrices in the power series definition of the exponential function. That is,</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now, back to the particular case $ \\ddot x +\\omega_0^2 x=0 $: the reason its real solution is written in terms of trigonometric functions is that its corresponding matrix has complex eigenvalues, and Euler's formula $$ e^{iz} = \\cos(z) + i \\sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions (which can be proven true) and Euler's formula, we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $, where $A$ is an $n \\times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere the $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $, where $A$ is an $n \\times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough, but this is supposed to be just a glimpse of the theory behind it.)</p>\n\n<p><strong>Solution.</strong> Writing the equation's associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $, we find its eigenvalues $ \\lambda_1 = -i \\omega_0 $ and $ \\lambda_2 = i \\omega_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions, and finally <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 \\cos(\\omega_0 t) + k_2 \\sin(\\omega_0 t) \\\\ -k_1 \\omega_0 \\sin(\\omega_0 t) + k_2 \\omega_0 \\cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory, but once that is done it becomes much easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope this brought some insight into a natural way of dealing with this kind of equation. You can find the details, worked examples and more in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" } ]
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
<p>Even more astonishing is that this definition also works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $: the reason its real solution is written in terms of trigonometric functions is that its corresponding matrix has complex eigenvalues, and Euler's formula $$ e^{iz} = \\cos(z) + i\\sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions, which can be proven true, together with Euler's formula, we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $, where $A$ is an $n \\times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere the $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $, where $A$ is an $n \\times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 \\mathrm{Re}(x(t)) + k_2 \\mathrm{Im}(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough, but this is only meant to be a glimpse of the theory behind it.)</p>\n\n<p><strong>Solution.</strong> Writing the equation's associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $, we find its eigenvalues $ \\lambda_1 = -i \\omega_0 $ and $ \\lambda_2 = i \\omega_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions, and finally <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 \\cos(\\omega_0 t) + k_2 \\sin(\\omega_0 t) \\\\ \\omega_0 \\left( - k_1 \\sin(\\omega_0 t) + k_2 \\cos(\\omega_0 t) \\right) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory, but after that it becomes much easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought some insight into a natural way of dealing with this kind of equation. You can find the details, worked examples, and more in the great book <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }
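<p>Addendum (an editorial sketch, not from any answer above): the eigenvalue recipe of Propositions 1 and 2 can be checked numerically with Python/NumPy. The snippet builds the complex solution matching an initial condition $(z_0, v_0)$ and compares its real part with the closed-form simple harmonic motion.</p>

<pre><code>import numpy as np

w0 = 1.5
A = np.array([[0.0, 1.0], [-w0**2, 0.0]])
lam, V = np.linalg.eig(A)  # eigenvalues come out as +/- i*w0 (order may vary)

# Prop. 1: x(t) = c1*exp(lam1*t)*v1 + c2*exp(lam2*t)*v2,
# with the constants c chosen so that x(0) = [z0, v0].
z0, v0 = 1.0, 0.3
c = np.linalg.solve(V, np.array([z0, v0], dtype=complex))

t = 0.9
x_eig = (V @ (c * np.exp(lam * t))).real  # real part, as in Prop. 2

x_closed = np.array([z0 * np.cos(w0 * t) + (v0 / w0) * np.sin(w0 * t),
                     -z0 * w0 * np.sin(w0 * t) + v0 * np.cos(w0 * t)])
print(np.allclose(x_eig, x_closed))  # expected: True
</code></pre>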
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Thus</p>\n\n<p>$$\\frac{\\dot x}{x}=-\\omega\\tan(\\omega t+c)\\implies \\dot{(\\log x)}=\\dot{(\\log \\cos (\\omega t+c))}$$</p>\n\n<p>and we deduce that $x=R\\cos(\\omega t+c)$ for some $R,c$.</p>\n" }, { "answer_id": 2079621, "score": 2, "text": "<p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\\gamma$ and $\\omega$ without any limitations by using the concept of the <a href=\"http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx\" rel=\"nofollow noreferrer\">characteristic polynomial</a>.</p>\n\n<p>$$\\frac{d^2 z}{dt^2}+2\\gamma\\frac{dz}{dt}+\\omega^2 z=0$$</p>\n\n<p>We will assume the form of the solution to be $z(t)=e^{\\lambda t}$</p>\n\n<p>We will substitute this into our differential equation to obtain:</p>\n\n<p>$$\\lambda^2 e^{\\lambda t}+2\\gamma\\lambda e^{\\lambda t}+\\omega^2 e^{\\lambda t}=0$$</p>\n\n<p>$$e^{\\lambda t}(\\lambda^2+2\\gamma\\lambda+\\omega^2)=0$$</p>\n\n<p>We deduce that $e^{\\lambda t}$ will only equal zero for non-zero values of $t$ if $\\lambda \\to -\\infty$. Hence, this is a trivial solution.</p>\n\n<p>Thus, we must only consider the solution to $\\lambda$ on $\\lambda^2+2\\gamma\\lambda+\\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p>\n\n<p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p>\n\n<p>$$\\lambda=\\frac{-2\\gamma \\pm \\sqrt{4\\gamma^2-4\\omega^2}}{2}=-\\gamma \\pm \\sqrt{\\gamma^2-\\omega^2}$$</p>\n\n<p>We denote $\\lambda_1$ and $\\lambda_2$ to be the roots of the above equation.</p>\n\n<p>We notice that depending on the values of the constants $\\gamma$ and $\\omega$, the solutions to $\\lambda$ will be different, and so hence our general solutions will be significantly different.</p>\n\n<p>We separate each case:</p>\n\n<ol>\n<li>If both roots are real and distinct (i.e. $\\lambda_1 \\neq \\lambda_2$)</li>\n<li>If both roots are complex.</li>\n<li>Real repeated roots (i.e. $\\lambda_1=\\lambda_2$)</li>\n</ol>\n\n<p><strong>Case 1</strong></p>\n\n<p>Since the particlular solutions are $z_1(t)=e^{\\lambda_1 t}$ and $z_2(t)=e^{\\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p>\n\n<p>$$z(t)=k_1 e^{\\lambda_1 t}+k_2 e^{\\lambda_2 t}$$</p>\n\n<p><strong>Case 2</strong></p>\n\n<p>We denote a complex number by $\\lambda=a+bi$.</p>\n\n<p>Since $\\lambda_{1,2}$ will both be complex, using $e^{i \\theta}=\\cos{\\theta}+i\\sin{\\theta}$, we obtain the general solution:</p>\n\n<p>$$z(t)=k_1 e^{at} \\cos{(bt)} + k_2 e^{at} \\sin{(bt)}$$</p>\n\n<p><strong>Case 3</strong></p>\n\n<p>$$z(t)=k_1 e^{\\lambda t}+k_2 t e^{\\lambda t}$$</p>\n\n<p><strong>END CASES</strong></p>\n\n<p>Hence, for the solution to the differential equation you wanted to find, $\\ddot{z}+\\omega z=0$, let $\\gamma=0$. Since both solutions to $\\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p>\n\n<p>$$z(t)=k_1 \\cos{(\\omega t)} + k_2 \\sin{(\\omega t)}$$</p>\n\n<p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p>\n\n<p>$$z(t)=z_0 \\cos{(\\omega t)} + \\frac{v_0}{\\omega} \\sin{(\\omega t)}$$</p>\n\n<p>Which can be written in the form:</p>\n\n<p>$$z(t)=A (\\cos{\\omega t+\\varphi})$$</p>\n\n<p>For more information, see:</p>\n\n<p><a href=\"http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx\" rel=\"nofollow noreferrer\">Real, Distinct Roots</a>, <a href=\"http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx\" rel=\"nofollow noreferrer\">Complex Roots</a> and <a href=\"http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx\" rel=\"nofollow noreferrer\">Repeated Roots</a></p>\n" }, { "answer_id": 2079874, "score": 2, "text": "<p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p>\n\n<p>Let $ v = \\dot z $. Then $ \\dot v = \\ddot z = - 2\\gamma\\dot z - \\omega_0^2z $.</p>\n\n<p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely\n$$ \\begin{cases} \\dot z = v \\\\ \\dot v = - 2\\gamma v - \\omega_0^2z \\end{cases} $$</p>\n\n<p>Which can be written in matrix form by setting\n$ x = \\begin{bmatrix} z \\\\ v \\end{bmatrix} $ and $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; - 2\\gamma \\end{bmatrix} $.</p>\n\n<p>Using this notation the system simply becomes $$ \\dot x = Ax $$</p>\n\n<p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \\in \\mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p>\n\n<p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \\dot x = Ax $ ?</em></p>\n\n<p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p>\n\n<p>$$ e^A := \\sum _{ k=0 }^{ \\infty }{ \\frac { 1 }{ k! 
} { A }^{ k } } $$</p>\n\n<p>Even more astonishing is that this definition even works for complex matrices!</p>\n\n<p>Now back to the particular case $ \\ddot x +\\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p>\n\n<p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p>\n\n<p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by\n$$ x(t) = \\sum _{ i=1 }^{ n }{ c_i e^{\\lambda_i t} v_i } $$\nwhere $c_i$ are complex constants. Note that this is complex exponentiation.</p>\n\n<p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \\dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\\lambda_1, ..., \\lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p>\n\n<p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p>\n\n<p><strong>Solution.</strong> Writing the equation associated matrix $ A = \\begin{bmatrix} 0 &amp; 1 \\\\ - \\omega_0^2 &amp; 0 \\end{bmatrix} $ we find its eigenvalues $ \\lambda_1 = -i w_0 $ and $ \\lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by:\n$$ x(t) = \\begin{bmatrix} z(t) \\\\ v(t) \\end{bmatrix} = \\begin{bmatrix} k_1 cos(\\omega_0 t) + k_2 sin(\\omega_0 t) \\\\ - k_1 sin(\\omega_0 t) + k_2 cos(\\omega_0 t) \\end{bmatrix} $$</p>\n\n<p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p>\n\n<p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>\n" }, { "answer_id": 2079577, "score": 6, "text": "<p>One, not-entirely-standard solution (though it does generalise, to some extent): </p>\n\n<p>Write your equation as $\\frac{\\ddot x}{x}=-\\omega^2$, and let $u=\\frac{\\dot x}{x}$. </p>\n\n<p>Then $\\dot u=\\frac{\\ddot x}{x}-\\left(\\frac{\\dot x}{x}\\right)^2\\implies \\dot u+u^2+\\omega^2=0\\implies\\frac{\\omega\\dot u}{u^2+\\omega^2}+\\omega=0$.</p>\n\n<p>From this we see:</p>\n\n<p>$$\\dot{\\left(\\arctan\\frac{u}{\\omega}\\right)}+\\omega=0\\implies u=-\\omega\\tan(\\omega t+c)$$</p>\n\n<p>for some $c$. 
Question: Is there another way to solve this differential equation? Answer 1: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p>
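As a quick sanity check of the closed form derived above, here is a minimal SymPy sketch (my own verification, not part of the original answer): it confirms symbolically that $x(t)=R\cos(\omega t+c)$ satisfies $\ddot x+\omega^2 x=0$, and that the substitution $u=\dot x/x$ does reduce to $-\omega\tan(\omega t+c)$.

```python
# Symbolic check of the answer above (assumes SymPy is installed).
import sympy as sp

t, R, c, w = sp.symbols('t R c omega', real=True)
x = R * sp.cos(w * t + c)                        # candidate solution from the answer

# Residual of x'' + w^2 x: should simplify to 0.
print(sp.simplify(sp.diff(x, t, 2) + w**2 * x))  # -> 0

# The substitution u = x'/x should equal -w*tan(w*t + c).
u = sp.diff(x, t) / x
print(sp.simplify(u + w * sp.tan(w * t + c)))    # -> 0
```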
Answer 2: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We assume the form of the solution to be $z(t)=e^{\lambda t}$ and substitute this into the differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>Since $e^{\lambda t}$ is never zero for finite $\lambda$ and $t$, this factor contributes no solutions.</p> <p>Thus, we need only consider the roots $\lambda$ of $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>Depending on the values of the constants $\gamma$ and $\omega$, the roots $\lambda$ will be different, and hence our general solutions will take significantly different forms.</p> <p>We separate the cases:</p> <ol> <li>Both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>Both roots are complex.</li> <li>A real repeated root (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, the general solution is the linear combination $z(t)=k_1 z_1(t) + k_2 z_2(t)$:</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We write the complex roots as $\lambda=a\pm bi$.</p> <p>Using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the differential equation you wanted to solve, $\ddot{z}+\omega^2 z=0$, let $\gamma=0$. Since both roots $\lambda$ are purely imaginary (i.e. complex with $a=0$), the solution takes the form of <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$; for simple harmonic motion this gives:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>This can be written in the form:</p> <p>$$z(t)=A \cos(\omega t+\varphi)$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p>
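The case analysis above translates directly into code. Here is a hedged SymPy sketch (the helper `general_solution` is my own name, not from the answer) that classifies the discriminant $\gamma^2-\omega^2$ and returns the matching general solution with free constants $k_1, k_2$:

```python
# Sketch: general solution of z'' + 2*gamma*z' + omega^2*z = 0 from the roots
# of the characteristic polynomial (assumes SymPy; the case split requires the
# sign of the discriminant to be decidable from the inputs).
import sympy as sp

t, k1, k2 = sp.symbols('t k1 k2')

def general_solution(gamma, omega):
    disc = gamma**2 - omega**2
    if disc > 0:                                   # Case 1: real, distinct roots
        l1, l2 = -gamma + sp.sqrt(disc), -gamma - sp.sqrt(disc)
        return k1 * sp.exp(l1 * t) + k2 * sp.exp(l2 * t)
    if disc == 0:                                  # Case 3: repeated real root
        return (k1 + k2 * t) * sp.exp(-gamma * t)
    b = sp.sqrt(-disc)                             # Case 2: roots -gamma +/- b*i
    return sp.exp(-gamma * t) * (k1 * sp.cos(b * t) + k2 * sp.sin(b * t))

omega = sp.Symbol('omega', positive=True)
print(general_solution(0, omega))                  # k1*cos(omega*t) + k2*sin(omega*t)
```

Setting $\gamma=0$ reproduces the undamped solution quoted above.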
Answer 3: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution of this differential equation has exponential form. This equation belongs to a class of differential equations which have simple solutions that <strong>indeed have exponential form</strong> in a more subtle sense. We will now find out what class of differential equations we are talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually a <em>system of homogeneous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>This can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>Doesn't this equation remind you of anything? You should be familiar with it when $ A \in \mathbb{R} $, where its solution is $ x(t) = ke^{At} $ for some real constant $k$. This is where the motivation for matrix exponentiation arises. Consider the following question:</p> <p><em>Is it possible to extend the definition of exponentiation from real numbers to real square matrices in such a way that it preserves all its properties, including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $?</em></p> <p>The answer is yes, and it is done naturally by just substituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum_{k=0}^{\infty} \frac{1}{k!} A^k $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $: the reason its real solution is written in terms of trigonometric functions is that its corresponding matrix has complex eigenvalues, and Euler's formula $$ e^{iz} = \cos(z) + i \sin(z) $$ plays a role here.</p> <p>By using the following two propositions, which can be proven true, together with Euler's formula, we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $, where $A$ is an $n \times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is given by $$ x(t) = \sum_{i=1}^{n} c_i e^{\lambda_i t} v_i $$ where the $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $, where $A$ is an $n \times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$, is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough, but this is supposed to be just a glimpse of the theory behind it.)</p> <p><strong>Solution.</strong> Writing the equation's associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $, we find its eigenvalues $ \lambda_1 = -i \omega_0 $ and $ \lambda_2 = i \omega_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions, and finally <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 \cos(\omega_0 t) + k_2 \sin(\omega_0 t) \\ - k_1 \omega_0 \sin(\omega_0 t) + k_2 \omega_0 \cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory, but once it is done it becomes much easier to find the general solution for this class of differential equations.</p> <p>I hope this gives some insight into a natural way of dealing with this kind of equation. You can find the details, worked examples and more in the great book <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>
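To see the matrix-exponential picture in action, here is a small numerical illustration (my own check, assuming NumPy and SciPy are available): it propagates $\dot x = Ax$ with $e^{At}$ and compares the position component against the closed-form solution $z(t)=z_0\cos(\omega_0 t)+\frac{v_0}{\omega_0}\sin(\omega_0 t)$.

```python
# Numerical check of the matrix-exponential solution (assumes NumPy/SciPy).
import numpy as np
from scipy.linalg import expm

w0 = 2.0                                   # natural frequency omega_0
A = np.array([[0.0,    1.0],
              [-w0**2, 0.0]])              # system matrix for z'' + w0^2 z = 0
z0, v0 = 1.0, 0.5                          # initial position and velocity
x0 = np.array([z0, v0])

for t in (0.3, 1.0, 1.7):
    z_matrix = (expm(A * t) @ x0)[0]       # position component of e^{At} x0
    z_closed = z0 * np.cos(w0 * t) + (v0 / w0) * np.sin(w0 * t)
    assert np.isclose(z_matrix, z_closed)  # the two solutions agree
print("matrix exponential matches the closed form")
```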
Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 6: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? 
You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. 
You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 7: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 8: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. 
Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 9: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! 
Answer 9: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution of this differential equation has exponential form. This equation belongs to a class of differential equations which have simple solutions, and those solutions <strong>do indeed have exponential form</strong>, in a more subtle sense. We will now find out which class of differential equations we are talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually a <em>system of homogeneous linear differential equations</em>, namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation, the system simply becomes $$ \dot x = Ax $$</p> <p>Doesn't this equation remind you of anything? You should be familiar with it when $ A \in \mathbb{R} $, with solution $ x(t) = ke^{At} $, where $k$ is some real constant. This is where the motivation for matrix exponentiation arises. Consider the following question:</p> <p><em>Is it possible to extend the definition of exponentiation from real numbers to real square matrices in such a way that it preserves all its properties, including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $?</em></p> <p>The answer is yes, and it is done naturally by simply substituting real matrices for real numbers in the power series definition of the exponential function. That is,</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now, back to the particular case $ \ddot x +\omega_0^2 x=0 $: the reason its real solution is written in terms of trigonometric functions is that its corresponding matrix has complex eigenvalues, and Euler's formula $$ e^{iz} = \cos(z) + i \sin(z) $$ plays a role here.</p> <p>Using the following two propositions, which can be proven true, together with Euler's formula, we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $, where $A$ is an $n \times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\lambda_1, \ldots, \lambda_n$ associated with eigenvectors $v_1, \ldots, v_n$, is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where the $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $, where $A$ is an $n \times n$ real matrix with $n$ distinct (possibly complex) eigenvalues $\lambda_1, \ldots, \lambda_n$ associated with eigenvectors $v_1, \ldots, v_n$, is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 \operatorname{Re}(x(t)) + k_2 \operatorname{Im}(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough, but this is meant to be just a glimpse of the theory behind it.)</p> <p><strong>Solution.</strong> Writing the equation's associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $, we find its eigenvalues $ \lambda_1 = -i \omega_0 $ and $ \lambda_2 = i \omega_0 $, with corresponding eigenvectors $v_1$ and $v_2$. Now we apply <em>(Prop. 1)</em> to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions, and finally <em>(Prop. 2)</em> with some algebra to arrive at the general real solution (note the factor $\omega_0$ in $v = \dot z$): $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 \cos(\omega_0 t) + k_2 \sin(\omega_0 t) \\ \omega_0\left(- k_1 \sin(\omega_0 t) + k_2 \cos(\omega_0 t)\right) \end{bmatrix} $$</p> <p>You have to go through a bit of theory, but once that is done it becomes much easier to find the general solution for this whole class of differential equations.</p> <p>I hope this gives some insight into a natural way of dealing with this kind of equation. You can find the details, worked examples, and more in the great book <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p>
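<p>To make the matrix-exponential picture concrete, here is a minimal numerical sketch in Python using numpy and scipy (illustrative only, with made-up sample values). It checks that $x(t)=e^{At}x(0)$ reproduces the trigonometric solution above in the undamped case:</p>
<pre><code>import numpy as np
from scipy.linalg import expm

omega0 = 2.0
A = np.array([[0.0, 1.0],
              [-omega0**2, 0.0]])  # x' = A x with x = (z, v)
z0, v0 = 1.0, 0.5                  # sample initial conditions

for t in (0.3, 1.0, 2.5):
    z_expm = (expm(A * t) @ np.array([z0, v0]))[0]  # e^{At} x(0), first component
    z_trig = z0 * np.cos(omega0 * t) + (v0 / omega0) * np.sin(omega0 * t)
    assert np.isclose(z_expm, z_trig)               # the two solutions agree
</code></pre>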
Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 11: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 12: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. 
Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 13: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 14: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. 
Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 15: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? 
You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. 
You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 16: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 17: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. 
Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 18: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! 
} { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 19: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. 
Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 20: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 21: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. 
Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 22: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 23: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. 
Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 24: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? 
You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. 
You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 25: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 26: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. 
Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 27: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! 
} { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 28: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. 
Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 29: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
Answer 29: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$, without any limitations, by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We assume the solution has the form $z(t)=e^{\lambda t}$.</p> <p>Substituting this into our differential equation, we obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>Since $e^{\lambda t}$ is never zero for any finite values of $\lambda$ and $t$, we may divide it out.</p> <p>Thus, we only need to solve $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. The two roots of the characteristic polynomial are:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote the roots of the above equation by $\lambda_1$ and $\lambda_2$.</p> <p>Depending on the values of the constants $\gamma$ and $\omega$, the roots will be different, and hence our general solutions will take significantly different forms.</p> <p>We separate the cases (a code sketch of this case analysis follows the answer):</p> <ol> <li>Both roots real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>Both roots complex</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they yield the general solution when substituted into $z(t)=k_1 z_1(t) + k_2 z_2(t)$:</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We write the complex roots as $\lambda=a \pm bi$.</p> <p>Since $\lambda_{1,2}$ are both complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$ we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the differential equation you wanted to solve, $\ddot{z}+\omega^2 z=0$, let $\gamma=0$. Since both roots are purely imaginary (i.e. complex with $a=0$), the solution takes the form of <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>This can be written in the form:</p> <p>$$z(t)=A \cos(\omega t+\varphi)$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p>
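<p>As promised above, the case analysis translates directly into code. The sketch below (our addition; the function name and test values are illustrative) branches on the sign of the discriminant $\gamma^2-\omega^2$ and prints the matching solution template:</p>
<pre><code>import math

def solution_form(gamma, omega):
    """General-solution template for z'' + 2*gamma*z' + omega^2*z = 0,
    following the three cases of the characteristic polynomial."""
    disc = gamma**2 - omega**2
    if disc > 0:                      # Case 1: real distinct roots
        l1 = -gamma + math.sqrt(disc)
        l2 = -gamma - math.sqrt(disc)
        return f"z(t) = k1*exp({l1:g}*t) + k2*exp({l2:g}*t)"
    if disc < 0:                      # Case 2: complex roots a +- b*i
        a = -gamma + 0.0              # + 0.0 normalizes -0.0 for printing
        b = math.sqrt(-disc)
        return f"z(t) = exp({a:g}*t)*(k1*cos({b:g}*t) + k2*sin({b:g}*t))"
    return f"z(t) = (k1 + k2*t)*exp({-gamma:g}*t)"   # Case 3: repeated root

print(solution_form(gamma=0.0, omega=2.0))   # undamped: pure sine/cosine
print(solution_form(gamma=3.0, omega=2.0))   # overdamped: two real exponentials
print(solution_form(gamma=2.0, omega=2.0))   # critically damped: repeated root
</code></pre>
<p>With $\gamma=0$ this reproduces the undamped form $k_1\cos(\omega t)+k_2\sin(\omega t)$ derived above.</p>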
Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 31: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 32: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. 
Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 33: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? 
You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. 
You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 34: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 35: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. 
Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 36: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! 
} { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 37: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. 
Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 38: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 39: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. 
Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 40: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 41: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. 
Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 42: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? 
You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. 
You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 43: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 44: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. 
Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 45: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! 
} { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 46: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. 
Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 47: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. Therefore, the two roots of the characteristic polynomial will be:</p> <p>$$\lambda=\frac{-2\gamma \pm \sqrt{4\gamma^2-4\omega^2}}{2}=-\gamma \pm \sqrt{\gamma^2-\omega^2}$$</p> <p>We denote $\lambda_1$ and $\lambda_2$ to be the roots of the above equation.</p> <p>We notice that depending on the values of the constants $\gamma$ and $\omega$, the solutions to $\lambda$ will be different, and so hence our general solutions will be significantly different.</p> <p>We separate each case:</p> <ol> <li>If both roots are real and distinct (i.e. $\lambda_1 \neq \lambda_2$)</li> <li>If both roots are complex.</li> <li>Real repeated roots (i.e. $\lambda_1=\lambda_2$)</li> </ol> <p><strong>Case 1</strong></p> <p>Since the particlular solutions are $z_1(t)=e^{\lambda_1 t}$ and $z_2(t)=e^{\lambda_2 t}$, they will yield the general solution when substituted to $z(t)=k_1 z_1(t) + k_2 z_2(t)$.</p> <p>$$z(t)=k_1 e^{\lambda_1 t}+k_2 e^{\lambda_2 t}$$</p> <p><strong>Case 2</strong></p> <p>We denote a complex number by $\lambda=a+bi$.</p> <p>Since $\lambda_{1,2}$ will both be complex, using $e^{i \theta}=\cos{\theta}+i\sin{\theta}$, we obtain the general solution:</p> <p>$$z(t)=k_1 e^{at} \cos{(bt)} + k_2 e^{at} \sin{(bt)}$$</p> <p><strong>Case 3</strong></p> <p>$$z(t)=k_1 e^{\lambda t}+k_2 t e^{\lambda t}$$</p> <p><strong>END CASES</strong></p> <p>Hence, for the solution to the differential equation you wanted to find, $\ddot{z}+\omega z=0$, let $\gamma=0$. Since both solutions to $\lambda$ are purely imaginary (i.e. Complex with $a=0$), the form of the solution will be <strong>Case 2</strong>. 
The general solution is hence:</p> <p>$$z(t)=k_1 \cos{(\omega t)} + k_2 \sin{(\omega t)}$$</p> <p>You can substitute your initial conditions into the equation to obtain the values of the arbitrary constants $k_1$ and $k_2$ for simple harmonic motion as:</p> <p>$$z(t)=z_0 \cos{(\omega t)} + \frac{v_0}{\omega} \sin{(\omega t)}$$</p> <p>Which can be written in the form:</p> <p>$$z(t)=A (\cos{\omega t+\varphi})$$</p> <p>For more information, see:</p> <p><a href="http://tutorial.math.lamar.edu/Classes/DE/RealRoots.aspx" rel="nofollow noreferrer">Real, Distinct Roots</a>, <a href="http://tutorial.math.lamar.edu/Classes/DE/ComplexRoots.aspx" rel="nofollow noreferrer">Complex Roots</a> and <a href="http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx" rel="nofollow noreferrer">Repeated Roots</a></p> Answer 48: <p>Let's first make clear that <strong>we are not assuming</strong> that the solution for this differential equation has exponential form. This equation belongs to a class of differential equations whose have simple solutions and <strong>indeed have exponential form</strong> on a more subtle sense. Now we will find out what class of differential equations are we talking about by defining a new variable.</p> <p>Let $ v = \dot z $. Then $ \dot v = \ddot z = - 2\gamma\dot z - \omega_0^2z $.</p> <p>So we can see that this equation is actually is a <em>system of homogenous linear differential equations</em>. Namely $$ \begin{cases} \dot z = v \\ \dot v = - 2\gamma v - \omega_0^2z \end{cases} $$</p> <p>Which can be written in matrix form by setting $ x = \begin{bmatrix} z \\ v \end{bmatrix} $ and $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; - 2\gamma \end{bmatrix} $.</p> <p>Using this notation the system simply becomes $$ \dot x = Ax $$</p> <p>This equation doesn't remember you anything? You should be familiar with this equation when $ A \in \mathbb{R} $ and its solution $ x(t) = ke^{At} $ where $k$ is some real constant. It is from here that arises the motivation for matrix exponentiation. Consider the following question:</p> <p><em>It is possible to extend the definition exponentiation from real numbers to real square matrices such that it preserves all its properties including the function $ x(t) = ke^{At} $ being the solution of the (now matrix) equation $ \dot x = Ax $ ?</em></p> <p>The answer is yes and it is done naturally by just subtituting real numbers by real matrices in the power series definition of the exponential function. That is</p> <p>$$ e^A := \sum _{ k=0 }^{ \infty }{ \frac { 1 }{ k! } { A }^{ k } } $$</p> <p>Even more astonishing is that this definition even works for complex matrices!</p> <p>Now back to the particular case $ \ddot x +\omega_0^2 x=0 $, the reason its real solution is written in terms of trigonometric functions is that its correspoding matrix has complex eigenvalues and Euler's formula $$ e^{iz} = cos(z) + i sin(z) $$ plays a role here.</p> <p>By using the following two propositions which can be proven true and Euler's Formula we derive the general real solution from the general complex solution.</p> <p><strong>Proposition 1.</strong> The general <em>complex</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is given by $$ x(t) = \sum _{ i=1 }^{ n }{ c_i e^{\lambda_i t} v_i } $$ where $c_i$ are complex constants. 
Note that this is complex exponentiation.</p> <p><strong>Proposition 2.</strong> The general <em>real</em> solution of the linear system $ \dot x = Ax $ where $A$ is a $n$ x $n$ real matrix with $n$ distinct (possible complex) eigenvalues $\lambda_1, ..., \lambda_n$ associated with eigenvectors $v_1, ..., v_n$ is a linear combination of real and imaginary parts of complex solutions. That is, if $x(t)$ is a complex solution, then $y(t) = k_1 Re(x(t)) + k_2 Im(x(t))$ is a real solution, where $k_1$ and $k_2$ are real constants.</p> <p>(The details are rough but this is supposed to be just a glimpse of the theory behind)</p> <p><strong>Solution.</strong> Writing the equation associated matrix $ A = \begin{bmatrix} 0 &amp; 1 \\ - \omega_0^2 &amp; 0 \end{bmatrix} $ we find its eigenvalues $ \lambda_1 = -i w_0 $ and $ \lambda_2 = i w_0 $ with corresponding eigenvectors $v_1$ and $v_2$. Now we apply (Prop. 1) to get the complex general solution, followed by Euler's formula to get complex linear combinations of trigonometric functions and finnaly <em>(Prop. 2)</em> with some algebra to arrive at the general real solution given by: $$ x(t) = \begin{bmatrix} z(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} k_1 cos(\omega_0 t) + k_2 sin(\omega_0 t) \\ - k_1 sin(\omega_0 t) + k_2 cos(\omega_0 t) \end{bmatrix} $$</p> <p>You have to go through a bit of theory but after it is done it gets much more easier to find the general solution for this class of differential equations.</p> <p>I hope I brought up some insight of a natural way to dealing with this kind of equations. You can find the details, worked examples and more about in the great book entitled <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> by Hirsch, Smale and Devaney.</p> Answer 49: <p>One, not-entirely-standard solution (though it does generalise, to some extent): </p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$. </p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\dot{\left(\arctan\frac{u}{\omega}\right)}+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \dot{(\log x)}=\dot{(\log \cos (\omega t+c))}$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p> Answer 50: <p>Since your second order linear ordinary differential equation has constant coefficients, you can <strong>generalise the solution</strong> to your equation (below) for any values of $\gamma$ and $\omega$ without any limitations by using the concept of the <a href="http://tutorial.math.lamar.edu/Classes/DE/HOHomogeneousDE.aspx" rel="nofollow noreferrer">characteristic polynomial</a>.</p> <p>$$\frac{d^2 z}{dt^2}+2\gamma\frac{dz}{dt}+\omega^2 z=0$$</p> <p>We will assume the form of the solution to be $z(t)=e^{\lambda t}$</p> <p>We will substitute this into our differential equation to obtain:</p> <p>$$\lambda^2 e^{\lambda t}+2\gamma\lambda e^{\lambda t}+\omega^2 e^{\lambda t}=0$$</p> <p>$$e^{\lambda t}(\lambda^2+2\gamma\lambda+\omega^2)=0$$</p> <p>We deduce that $e^{\lambda t}$ will only equal zero for non-zero values of $t$ if $\lambda \to -\infty$. Hence, this is a trivial solution.</p> <p>Thus, we must only consider the solution to $\lambda$ on $\lambda^2+2\gamma\lambda+\omega^2=0$ (<strong>Our characteristic polynomial</strong>).</p> <p>We solve this using the quadratic formula. 
Answer 49: <p>One not-entirely-standard solution (though it does generalise, to some extent):</p> <p>Write your equation as $\frac{\ddot x}{x}=-\omega^2$, and let $u=\frac{\dot x}{x}$.</p> <p>Then $\dot u=\frac{\ddot x}{x}-\left(\frac{\dot x}{x}\right)^2\implies \dot u+u^2+\omega^2=0\implies\frac{\omega\dot u}{u^2+\omega^2}+\omega=0$.</p> <p>From this we see:</p> <p>$$\frac{d}{dt}\left(\arctan\frac{u}{\omega}\right)+\omega=0\implies u=-\omega\tan(\omega t+c)$$</p> <p>for some $c$. Thus</p> <p>$$\frac{\dot x}{x}=-\omega\tan(\omega t+c)\implies \frac{d}{dt}\log x=\frac{d}{dt}\log \cos (\omega t+c)$$</p> <p>and we deduce that $x=R\cos(\omega t+c)$ for some $R,c$.</p>
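The substitution in Answer 49 turns the linear ODE into a Riccati equation, $\dot u + u^2 + \omega^2 = 0$; the claimed solution can be checked symbolically (a minimal sketch, assuming SymPy; not from the original answer):

```python
import sympy as sp

t, w, c = sp.symbols('t omega c')
u = -w * sp.tan(w * t + c)
# u should satisfy the Riccati equation u' + u^2 + omega^2 = 0.
print(sp.simplify(sp.diff(u, t) + u**2 + w**2))   # -> 0
```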
Law of exponent for complex numbers
2,079,710
9
https://math.stackexchange.com/questions/2079710/law-of-exponent-for-complex-numbers
<p>Why does the following law</p> <p>$$(z_1z_2)^w=z_1^wz_2^w$$</p> <p>sometimes fail, such as when $z_1=z_2=-1$ and $w=-i$?</p> <p>And what conditions do I need on $z_1$ and $z_2$ so that the above identity holds for all complex values of $w$?</p>
[ { "answer_id": 2079751, "score": 6, "text": "<p>The if we write $z_i = r_ie^{i\\theta_i}$ and $w = x + iy$ then we have $$(z_1z_2)^w = e^{\\ln(r_1r_2)x}e^{i\\ln(r_1r_2)y}e^{i(\\theta_1+\\theta_2) x}e^{-y(\\theta_1 + \\theta_2)}$$</p>\n\n<p>The same formal identity holds for $z_1^wz_2^w.$ The reason the two calculations come out different is that when calculating $(z_1z_2)^w,$ you reduce $\\theta_1+\\theta_2$ mod $2\\pi$ after you multiply $z_1$ and $z_2$, but before you exponentiate. Thus the problematic factor is $e^{-y(\\theta_1+\\theta_2)}$ which will be a factor of $e^{2\\pi y}$ different between the two calculations if such a reduction takes place. Therefore the two calculations come out the same whenever $\\theta_1 + \\theta_2 &lt; 2\\pi$. </p>\n\n<p>That's not the case in your example, since for $z_1 = z_2 = -1,$ $\\theta_1 = \\theta_2 = \\pi$ so you reduce $\\theta_1 + \\theta_2 = 2\\pi$ to $0$ in one of the calculations. Thus the answers are different by a factor of $e^{2\\pi}.$</p>\n" }, { "answer_id": 2079754, "score": 1, "text": "<p>Note first that when <span class=\"math-container\">$w=0$</span>, then clearly the equality holds. So let <span class=\"math-container\">$w \\neq 0$</span>. Then\n<span class=\"math-container\">$$(z_1z_2)^w = z_1^wz_2^w \\iff \\exp(w \\ln(z_1z_2)) = \\exp(w \\ln(z_1))\\exp(w \\ln(z_2)) \\iff \\exp(w \\ln(z_1z_2)) = \\exp(w(\\ln(z_1)+\\ln(z_2))) \\iff w \\ln(z_1z_2) = w(\\ln(z_1)+\\ln(z_2)) \\iff \\ln(z_1z_2) = \\ln(z_1)+\\ln(z_2)$$</span>\nHence we found the required condition on <span class=\"math-container\">$z_1$</span> and <span class=\"math-container\">$z_2$</span> such that the equality holds, namely\n<span class=\"math-container\">$$\\ln(z_1z_2) = \\ln(z_1)+\\ln(z_2).$$</span></p>\n" }, { "answer_id": 2079751, "score": 6, "text": "<p>The if we write $z_i = r_ie^{i\\theta_i}$ and $w = x + iy$ then we have $$(z_1z_2)^w = e^{\\ln(r_1r_2)x}e^{i\\ln(r_1r_2)y}e^{i(\\theta_1+\\theta_2) x}e^{-y(\\theta_1 + \\theta_2)}$$</p>\n\n<p>The same formal identity holds for $z_1^wz_2^w.$ The reason the two calculations come out different is that when calculating $(z_1z_2)^w,$ you reduce $\\theta_1+\\theta_2$ mod $2\\pi$ after you multiply $z_1$ and $z_2$, but before you exponentiate. Thus the problematic factor is $e^{-y(\\theta_1+\\theta_2)}$ which will be a factor of $e^{2\\pi y}$ different between the two calculations if such a reduction takes place. Therefore the two calculations come out the same whenever $\\theta_1 + \\theta_2 &lt; 2\\pi$. </p>\n\n<p>That's not the case in your example, since for $z_1 = z_2 = -1,$ $\\theta_1 = \\theta_2 = \\pi$ so you reduce $\\theta_1 + \\theta_2 = 2\\pi$ to $0$ in one of the calculations. Thus the answers are different by a factor of $e^{2\\pi}.$</p>\n" }, { "answer_id": 2079754, "score": 1, "text": "<p>Note first that when <span class=\"math-container\">$w=0$</span>, then clearly the equality holds. So let <span class=\"math-container\">$w \\neq 0$</span>. 
Question: Law of exponent for complex numbers Answer 1: <p>If we write $z_i = r_ie^{i\theta_i}$ and $w = x + iy$ then we have $$(z_1z_2)^w = e^{\ln(r_1r_2)x}e^{i\ln(r_1r_2)y}e^{i(\theta_1+\theta_2) x}e^{-y(\theta_1 + \theta_2)}$$</p> <p>The same formal identity holds for $z_1^wz_2^w.$ The reason the two calculations come out different is that when calculating $(z_1z_2)^w,$ you reduce $\theta_1+\theta_2$ mod $2\pi$ after you multiply $z_1$ and $z_2$, but before you exponentiate. Thus the problematic factor is $e^{-y(\theta_1+\theta_2)}$ which will be a factor of $e^{2\pi y}$ different between the two calculations if such a reduction takes place. Therefore the two calculations come out the same whenever $\theta_1 + \theta_2 &lt; 2\pi$. </p> <p>That's not the case in your example, since for $z_1 = z_2 = -1,$ $\theta_1 = \theta_2 = \pi$ so you reduce $\theta_1 + \theta_2 = 2\pi$ to $0$ in one of the calculations. Thus the answers are different by a factor of $e^{2\pi}.$</p> Answer 2: <p>Note first that when <span class="math-container">$w=0$</span>, then clearly the equality holds. So let <span class="math-container">$w \neq 0$</span>. Then <span class="math-container">$$(z_1z_2)^w = z_1^wz_2^w \iff \exp(w \ln(z_1z_2)) = \exp(w \ln(z_1))\exp(w \ln(z_2)) \iff \exp(w \ln(z_1z_2)) = \exp(w(\ln(z_1)+\ln(z_2))) \iff w \ln(z_1z_2) = w(\ln(z_1)+\ln(z_2)) \iff \ln(z_1z_2) = \ln(z_1)+\ln(z_2)$$</span> Hence we found the required condition on <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span> such that the equality holds, namely <span class="math-container">$$\ln(z_1z_2) = \ln(z_1)+\ln(z_2).$$</span></p>
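A quick numeric check of the branch-cut factor described in Answer 1: a minimal sketch using Python's standard cmath module with the principal-branch logarithm. The values $z_1=z_2=-1$ and $w=i$ are just the illustrative example from the record.

```python
import cmath
import math

z1 = z2 = -1 + 0j
w = 1j  # w = x + iy with x = 0, y = 1, so a branch jump appears as a factor e^(2*pi)

# Principal-branch power: z**w = exp(w * Log z), with Arg z in (-pi, pi]
lhs = cmath.exp(w * cmath.log(z1 * z2))                            # (z1*z2)^w = 1^i = 1
rhs = cmath.exp(w * cmath.log(z1)) * cmath.exp(w * cmath.log(z2))  # e^(-pi) * e^(-pi)

print(lhs)                    # (1+0j)
print(rhs)                    # ~(0.00187+0j), i.e. e^(-2*pi)
print(abs(lhs) / abs(rhs))    # ~535.49
print(math.exp(2 * math.pi))  # ~535.49, matching the predicted factor
```

The ratio of magnitudes is exactly the factor $e^{2\pi}$ computed in both answers.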
Solve $(x-2)^6+(x-4)^6=64$ using a substitution.
2,079,939
9
https://math.stackexchange.com/questions/2079939/solve-x-26x-46-64-using-a-substitution
<p>Solve $(x-2)^6+(x-4)^6=64$ using a substitution. </p> <p>I tried using $t=x-2$.</p> <p>But again I have to know the expansion of the power $6$.</p> <p>Can it be transformed to a quadratic equation?</p> <p><strong>I know that it can be somehow solved by expanding the powers. But I'm trying to get a good transformation by a sub.</strong></p>
[ { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. And increasing for $(-1,\\infty)$, so it at most has two roots.</p>\n" }, { "answer_id": 2079966, "score": 1, "text": "<p>Let $x-2=a$ and $4-x=b$.</p>\n\n<p>Hence, $a+b=2$ and $a^6+b^6=64$ or \n$$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or\n$$(2-ab)(a^2b^2-16ab+16)=32$$ or\n$$ab(a^2b^2-18ab+48)=0.$$ </p>\n\n<p>$ab=0$ gives $x=2$ or $x=4$.\n$ab=9-\\sqrt{33}$ does not give a real roots and $ab=9+\\sqrt{33}$ does not give a real roots.</p>\n\n<p>Id est, the answer is $\\{2,4\\}$.</p>\n" }, { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. And increasing for $(-1,\\infty)$, so it at most has two roots.</p>\n" }, { "answer_id": 2079966, "score": 1, "text": "<p>Let $x-2=a$ and $4-x=b$.</p>\n\n<p>Hence, $a+b=2$ and $a^6+b^6=64$ or \n$$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or\n$$(2-ab)(a^2b^2-16ab+16)=32$$ or\n$$ab(a^2b^2-18ab+48)=0.$$ </p>\n\n<p>$ab=0$ gives $x=2$ or $x=4$.\n$ab=9-\\sqrt{33}$ does not give a real roots and $ab=9+\\sqrt{33}$ does not give a real roots.</p>\n\n<p>Id est, the answer is $\\{2,4\\}$.</p>\n" }, { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. 
And increasing for $(-1,\\infty)$, so it at most has two roots.</p>\n" }, { "answer_id": 2079966, "score": 1, "text": "<p>Let $x-2=a$ and $4-x=b$.</p>\n\n<p>Hence, $a+b=2$ and $a^6+b^6=64$ or \n$$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or\n$$(2-ab)(a^2b^2-16ab+16)=32$$ or\n$$ab(a^2b^2-18ab+48)=0.$$ </p>\n\n<p>$ab=0$ gives $x=2$ or $x=4$.\n$ab=9-\\sqrt{33}$ does not give a real roots and $ab=9+\\sqrt{33}$ does not give a real roots.</p>\n\n<p>Id est, the answer is $\\{2,4\\}$.</p>\n" }, { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. And increasing for $(-1,\\infty)$, so it at most has two roots.</p>\n" }, { "answer_id": 2079966, "score": 1, "text": "<p>Let $x-2=a$ and $4-x=b$.</p>\n\n<p>Hence, $a+b=2$ and $a^6+b^6=64$ or \n$$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or\n$$(2-ab)(a^2b^2-16ab+16)=32$$ or\n$$ab(a^2b^2-18ab+48)=0.$$ </p>\n\n<p>$ab=0$ gives $x=2$ or $x=4$.\n$ab=9-\\sqrt{33}$ does not give a real roots and $ab=9+\\sqrt{33}$ does not give a real roots.</p>\n\n<p>Id est, the answer is $\\{2,4\\}$.</p>\n" }, { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. And increasing for $(-1,\\infty)$, so it at most has two roots.</p>\n" }, { "answer_id": 2079966, "score": 1, "text": "<p>Let $x-2=a$ and $4-x=b$.</p>\n\n<p>Hence, $a+b=2$ and $a^6+b^6=64$ or \n$$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or\n$$(2-ab)(a^2b^2-16ab+16)=32$$ or\n$$ab(a^2b^2-18ab+48)=0.$$ </p>\n\n<p>$ab=0$ gives $x=2$ or $x=4$.\n$ab=9-\\sqrt{33}$ does not give a real roots and $ab=9+\\sqrt{33}$ does not give a real roots.</p>\n\n<p>Id est, the answer is $\\{2,4\\}$.</p>\n" }, { "answer_id": 2079956, "score": 18, "text": "<p>If you put $t = x-3$, you have\n$$\n(t-1)^6 + (t+1)^6 = 64\n$$\nthe odd terms cancel out when you expand the LHS, so you get \n$$\nu^3 + 15u^2 + 15u + 1 = 32\n$$</p>\n\n<p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\\pm1$ and $x = 4,2$) is a solution, so can factor to \n$$\n(u^2 + 16u + 31)(u-1) = 0\n$$</p>\n" }, { "answer_id": 2079961, "score": 4, "text": "<p>Put $y=x-4$</p>\n\n<p>$$(y+2)^6+y^6=2^6$$</p>\n\n<p>It should be clear that $0$ and $-2$ are solutions. </p>\n\n<p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \\in (-\\infty,-1)$. 
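A quick symbolic check of the substitution route in the first answer ($t=x-3$, then $u=t^2$): a minimal sketch assuming SymPy is available, reproducing the expansion, the factorization, and the real roots.

```python
from sympy import symbols, expand, factor, solveset, S

t, u, x = symbols('t u x')

# With x = t + 3 the bases become t - 1 and t + 1; the odd powers of t cancel
print(expand((t - 1)**6 + (t + 1)**6))     # 2*t**6 + 30*t**4 + 30*t**2 + 2

# Divide by 2, set u = t**2, move 32 across: u**3 + 15*u**2 + 15*u - 31 = 0
print(factor(u**3 + 15*u**2 + 15*u - 31))  # (u - 1)*(u**2 + 16*u + 31)

# Real solutions of the original equation
print(solveset((x - 2)**6 + (x - 4)**6 - 64, x, domain=S.Reals))  # {2, 4}
```

The quadratic factor $u^2+16u+31$ has roots $-8\pm\sqrt{33}$, both negative, so $u=t^2=1$ is the only admissible value, recovering $x=2$ and $x=4$.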
Question: Solve $(x-2)^6+(x-4)^6=64$ using a substitution. Answer 1: <p>If you put $t = x-3$, you have $$ (t-1)^6 + (t+1)^6 = 64 $$ the odd terms cancel out when you expand the LHS, so you get $$ u^3 + 15u^2 + 15u + 1 = 32 $$</p> <p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\pm1$ and $x = 4,2$) is a solution, so can factor to $$ (u^2 + 16u + 31)(u-1) = 0 $$</p> Answer 2: <p>Put $y=x-4$</p> <p>$$(y+2)^6+y^6=2^6$$</p> <p>It should be clear that $0$ and $-2$ are solutions. </p> <p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \in (-\infty,-1)$. And increasing for $(-1,\infty)$, so it at most has two roots.</p> Answer 3: <p>Let $x-2=a$ and $4-x=b$.</p> <p>Hence, $a+b=2$ and $a^6+b^6=64$ or $$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or $$(2-ab)(a^2b^2-16ab+16)=32$$ or $$ab(a^2b^2-18ab+48)=0.$$ </p> <p>$ab=0$ gives $x=2$ or $x=4$. $ab=9-\sqrt{33}$ does not give a real roots and $ab=9+\sqrt{33}$ does not give a real roots.</p> <p>Id est, the answer is $\{2,4\}$.</p> Answer 4: <p>If you put $t = x-3$, you have $$ (t-1)^6 + (t+1)^6 = 64 $$ the odd terms cancel out when you expand the LHS, so you get $$ u^3 + 15u^2 + 15u + 1 = 32 $$</p> <p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\pm1$ and $x = 4,2$) is a solution, so can factor to $$ (u^2 + 16u + 31)(u-1) = 0 $$</p> Answer 5: <p>Put $y=x-4$</p> <p>$$(y+2)^6+y^6=2^6$$</p> <p>It should be clear that $0$ and $-2$ are solutions. </p> <p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \in (-\infty,-1)$. And increasing for $(-1,\infty)$, so it at most has two roots.</p> Answer 6: <p>Let $x-2=a$ and $4-x=b$.</p> <p>Hence, $a+b=2$ and $a^6+b^6=64$ or $$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or $$(2-ab)(a^2b^2-16ab+16)=32$$ or $$ab(a^2b^2-18ab+48)=0.$$ </p> <p>$ab=0$ gives $x=2$ or $x=4$. $ab=9-\sqrt{33}$ does not give a real roots and $ab=9+\sqrt{33}$ does not give a real roots.</p> <p>Id est, the answer is $\{2,4\}$.</p> Answer 7: <p>If you put $t = x-3$, you have $$ (t-1)^6 + (t+1)^6 = 64 $$ the odd terms cancel out when you expand the LHS, so you get $$ u^3 + 15u^2 + 15u + 1 = 32 $$</p> <p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\pm1$ and $x = 4,2$) is a solution, so can factor to $$ (u^2 + 16u + 31)(u-1) = 0 $$</p> Answer 8: <p>Put $y=x-4$</p> <p>$$(y+2)^6+y^6=2^6$$</p> <p>It should be clear that $0$ and $-2$ are solutions. </p> <p>The function $f(y)=(y+2)^6+y^6-2^6$ is decreasing for $y \in (-\infty,-1)$. And increasing for $(-1,\infty)$, so it at most has two roots.</p> Answer 9: <p>Let $x-2=a$ and $4-x=b$.</p> <p>Hence, $a+b=2$ and $a^6+b^6=64$ or $$(a^2+b^2)(a^4+b^4-a^2b^2)=64$$ or $$(2-ab)(a^2b^2-16ab+16)=32$$ or $$ab(a^2b^2-18ab+48)=0.$$ </p> <p>$ab=0$ gives $x=2$ or $x=4$. $ab=9-\sqrt{33}$ does not give a real roots and $ab=9+\sqrt{33}$ does not give a real roots.</p> <p>Id est, the answer is $\{2,4\}$.</p> Answer 10: <p>If you put $t = x-3$, you have $$ (t-1)^6 + (t+1)^6 = 64 $$ the odd terms cancel out when you expand the LHS, so you get $$ u^3 + 15u^2 + 15u + 1 = 32 $$</p> <p>where $u = (x-3)^2$. From looking at either the initial equation or this one, it's clear that $u = 1$ (corresponding to $t=\pm1$ and $x = 4,2$) is a solution, so can factor to $$ (u^2 + 16u + 31)(u-1) = 0 $$</p> Answer 11: <p>Put $y=x-4$</p> <p>$$(y+2)^6+y^6=2^6$$</p> <p>It should be clear that $0$ and $-2$ are solutions. 
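<p>A quick numerical cross-check of the answers above, as a minimal sketch (it assumes NumPy is available and is illustrative, not taken from the thread). With $t=x-3$, the equation reduces to $t^6+15t^4+15t^2-31=0$ after dividing by $2$, so the real roots of that polynomial, shifted back by $3$, should be exactly $2$ and $4$:</p>
<pre><code># Verify that x = 2 and x = 4 are the only real solutions of
# (x-2)^6 + (x-4)^6 = 64, matching (u - 1)(u^2 + 16u + 31) = 0, u = (x-3)^2.
import numpy as np

# Coefficients of t^6 + 15 t^4 + 15 t^2 - 31, where t = x - 3.
coeffs = [1, 0, 15, 0, 15, 0, -31]
roots_t = np.roots(coeffs)
real_t = roots_t[np.isclose(roots_t.imag, 0.0)].real   # keep real roots only
print(sorted(real_t + 3))                              # -> [2.0, 4.0]
</code></pre>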
Linearity of expectations - Why does it hold intuitively even when the r.v.s are correlated?
2,080,030
9
https://math.stackexchange.com/questions/2080030/linearity-of-expectations-why-does-it-hold-intuitively-even-when-the-r-v-s-are
<p>An experiment, say rolling a die, is performed a large number of times, $n$. Let $X$ and $Y$ be two random variables that summarize this experiment.</p> <p>Intuitively (by the law of large numbers), if I observe the values of $X$ over a large number of trials and take their mean, $m_{X}=\frac{1}{n}\sum_{i}{x_{i}}$, observe the values of $Y$ and take their mean, $m_{Y}=\frac{1}{n}\sum_{i}{y_{i}}$, and then add the two column means, the result is very close to $E(X)+E(Y)$.</p> <p>If we observe the values of $X+Y$ in a third column and take their arithmetic mean, $m_{X+Y}$, this will be very close to $E(X+Y)$.</p> <p>Therefore linearity of expectation, $E(X+Y)=E(X)+E(Y)$, emerges as a simple fact of arithmetic (we're just adding the same numbers in a different order).</p> <p>I know linearity of expectation holds even when $X$ and $Y$ are dependent. For example, both the binomial and the hypergeometric expectation are $E(X)=np$, although the $Bern(p)$ random variables are i.i.d. in the binomial story but dependent in the hypergeometric story.</p> <p>If two random variables are correlated, wouldn't that affect the average of their sum, compared to when they are uncorrelated? Any insight or intuition would be great!</p>
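<p>(A minimal simulation sketch of the three-column experiment described above, added for illustration; it assumes plain Python, and the toy dependence $Y=X^2$ is my choice, not from the original post. The column means agree regardless of the dependence, because they are the same finite sum added in a different order.)</p>
<pre><code># Three columns: X (a die roll), Y = X**2 (strongly dependent on X), and X+Y.
import random

random.seed(0)
n = 100_000
xs = [random.randint(1, 6) for _ in range(n)]   # column of X values
ys = [x * x for x in xs]                        # column of Y values (dependent on X)
sums = [x + y for x, y in zip(xs, ys)]          # column of X+Y values

m_x, m_y, m_xy = sum(xs) / n, sum(ys) / n, sum(sums) / n
print(m_x + m_y, m_xy)   # equal up to floating-point rounding, ~18.67
</code></pre>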
[ { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p>\n" }, { "answer_id": 2080045, "score": 2, "text": "<blockquote>\n <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p>\n</blockquote>\n\n<p>Being correlated or uncorrelated matters when we have $\\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\\mathsf{E}(XY)=\\mathsf{E}(X)\\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p>\n" }, { "answer_id": 2711852, "score": 2, "text": "<p>Here's my non-rigorous attempt:</p>\n\n<p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p>\n\n<p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \\sum_i E[X|A_i]P(A_i)$$</p>\n\n<p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p>\n\n<p>For reference, <a href=\"https://math.stackexchange.com/a/2551102/122489\">another good answer</a>. </p>\n" }, { "answer_id": 4492549, "score": 2, "text": "<p>I will give a reasoning considering the dependence between <span class=\"math-container\">$X$</span> and <span class=\"math-container\">$Y$</span>. <span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" }, { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). 
Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p>\n" }, { "answer_id": 2080045, "score": 2, "text": "<blockquote>\n <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p>\n</blockquote>\n\n<p>Being correlated or uncorrelated matters when we have $\\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\\mathsf{E}(XY)=\\mathsf{E}(X)\\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p>\n" }, { "answer_id": 2711852, "score": 2, "text": "<p>Here's my non-rigorous attempt:</p>\n\n<p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p>\n\n<p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \\sum_i E[X|A_i]P(A_i)$$</p>\n\n<p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p>\n\n<p>For reference, <a href=\"https://math.stackexchange.com/a/2551102/122489\">another good answer</a>. </p>\n" }, { "answer_id": 4492549, "score": 2, "text": "<p>I will give a reasoning considering the dependence between <span class=\"math-container\">$X$</span> and <span class=\"math-container\">$Y$</span>. <span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" }, { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. 
<span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" }, { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p>\n" }, { "answer_id": 2080045, "score": 2, "text": "<blockquote>\n <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p>\n</blockquote>\n\n<p>Being correlated or uncorrelated matters when we have $\\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\\mathsf{E}(XY)=\\mathsf{E}(X)\\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p>\n" }, { "answer_id": 2711852, "score": 2, "text": "<p>Here's my non-rigorous attempt:</p>\n\n<p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p>\n\n<p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \\sum_i E[X|A_i]P(A_i)$$</p>\n\n<p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p>\n\n<p>For reference, <a href=\"https://math.stackexchange.com/a/2551102/122489\">another good answer</a>. </p>\n" }, { "answer_id": 4492549, "score": 2, "text": "<p>I will give a reasoning considering the dependence between <span class=\"math-container\">$X$</span> and <span class=\"math-container\">$Y$</span>. 
<span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" }, { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p>\n" }, { "answer_id": 2080045, "score": 2, "text": "<blockquote>\n <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p>\n</blockquote>\n\n<p>Being correlated or uncorrelated matters when we have $\\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\\mathsf{E}(XY)=\\mathsf{E}(X)\\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p>\n" }, { "answer_id": 2711852, "score": 2, "text": "<p>Here's my non-rigorous attempt:</p>\n\n<p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p>\n\n<p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \\sum_i E[X|A_i]P(A_i)$$</p>\n\n<p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p>\n\n<p>For reference, <a href=\"https://math.stackexchange.com/a/2551102/122489\">another good answer</a>. </p>\n" }, { "answer_id": 4492549, "score": 2, "text": "<p>I will give a reasoning considering the dependence between <span class=\"math-container\">$X$</span> and <span class=\"math-container\">$Y$</span>. 
<span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" }, { "answer_id": 2080035, "score": 13, "text": "<p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then\n$$ E(X+Y) = \\frac{(x_1+y_1)+(x_2+y_2)+\\cdots+(x_n+y_n)}n $$\nand\n$$ E(X)+E(Y) = \\frac{x_1+x_2+\\cdots+x_n}n + \\frac{y_1+y_2+\\cdots+y_n}n $$\nwhich is obviously the same.</p>\n" }, { "answer_id": 2080049, "score": 6, "text": "<p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p>\n" }, { "answer_id": 2080045, "score": 2, "text": "<blockquote>\n <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p>\n</blockquote>\n\n<p>Being correlated or uncorrelated matters when we have $\\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\\mathsf{E}(XY)=\\mathsf{E}(X)\\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p>\n" }, { "answer_id": 2711852, "score": 2, "text": "<p>Here's my non-rigorous attempt:</p>\n\n<p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p>\n\n<p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \\sum_i E[X|A_i]P(A_i)$$</p>\n\n<p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p>\n\n<p>For reference, <a href=\"https://math.stackexchange.com/a/2551102/122489\">another good answer</a>. </p>\n" }, { "answer_id": 4492549, "score": 2, "text": "<p>I will give a reasoning considering the dependence between <span class=\"math-container\">$X$</span> and <span class=\"math-container\">$Y$</span>. 
<span class=\"math-container\">\\begin{align} E(X+Y) &amp; = \\sum_y\\sum_x(x+y)p(x,y) \\\\&amp; =\\sum_y\\sum_x(x+y)p(x)p(y|x) \\\\ &amp;= \\sum_y\\sum_xxp(x)p(y|x) + \\sum_y\\sum_xyp(x)p(y|x) \\\\&amp;=\\sum_xxp(x)\\sum_yp(y|x) + \\sum_xp(x)\\sum_yyp(y|x) \\\\&amp;=\\sum_xxp(x) + \\sum_xp(x)E(Y|X=x) \\\\&amp;=E(X) + E_X(E(Y|X)) \\\\&amp;=E(X) + E(Y) \\end{align}</span>\nSo <span class=\"math-container\">$E(X+Y)$</span> can be transformed into any of the following three: <span class=\"math-container\">$E(X + E(Y|X))$</span>, <span class=\"math-container\">$E(X) + E_X(E(Y|X))$</span>, <span class=\"math-container\">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>\n" } ]
Question: Linearity of expectations - Why does it hold intuitively even when the r.v.s are correlated?
Answer 1: <p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then $$ E(X+Y) = \frac{(x_1+y_1)+(x_2+y_2)+\cdots+(x_n+y_n)}n $$ and $$ E(X)+E(Y) = \frac{x_1+x_2+\cdots+x_n}n + \frac{y_1+y_2+\cdots+y_n}n $$ which is obviously the same.</p>
Answer 2: <p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that $X$ and $Y$ are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance, since $X+Y$ will tend to be larger in magnitude because $X$ and $Y$ have the same sign more often.</p>
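The two answers above make a claim that is easy to verify numerically: correlation moves the variance of $X+Y$, never its mean. Below is a minimal simulation sketch of that check, assuming NumPy is available; the bivariate normal model, the means, and the 0.8 covariance are illustrative choices added here, not taken from any answer.

```python
import numpy as np

# Draw a large sample of strongly correlated (X, Y) pairs.
rng = np.random.default_rng(0)
mean = [1.0, -2.0]
cov = [[1.0, 0.8],   # Cov(X, Y) = 0.8: X and Y usually move together
       [0.8, 1.0]]
x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T

# Linearity of expectation: these two agree to floating-point precision,
# because the sample mean itself is a linear function of the data.
print(np.mean(x + y))             # ~ -1.0
print(np.mean(x) + np.mean(y))    # ~ -1.0

# Variance is NOT additive under correlation:
print(np.var(x + y))              # ~ 3.6 = 1 + 1 + 2 * 0.8
print(np.var(x) + np.var(y))      # ~ 2.0
```

The exact agreement of the first two numbers for any seed is Answer 1's finite-sample argument in executable form, while the gap between the last two is Answer 2's point about the variance.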
Answer 3: <blockquote> <p>If two random variables are correlated, wouldn't that affect the average of their sum, compared to if they were uncorrelated?</p> </blockquote> <p>Being correlated or uncorrelated matters when we have $\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\mathsf{E}(XY)=\mathsf{E}(X)\mathsf{E}(Y)$. Obviously, when we consider the expectation of a sum of RVs, or a sum of expectations, no such terms appear. Hence being correlated or not does not change anything.</p>
Answer 4: <p>Here's my non-rigorous attempt:</p> <p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p> <p>or, in a special case, intuitively if the $A_i$ partition the sample space, $$E[X] = \sum_i E[X|A_i]P(A_i)$$</p> <p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other is already known to $E[X]$ and $E[Y]$.</p> <p>For reference, <a href="https://math.stackexchange.com/a/2551102/122489">another good answer</a>.</p>
</p> Answer 89: <p>Here's my non-rigorous attempt:</p> <p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p> <p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \sum_i E[X|A_i]P(A_i)$$</p> <p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p> <p>For reference, <a href="https://math.stackexchange.com/a/2551102/122489">another good answer</a>. </p> Answer 90: <p>I will give a reasoning considering the dependence between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. <span class="math-container">\begin{align} E(X+Y) &amp; = \sum_y\sum_x(x+y)p(x,y) \\&amp; =\sum_y\sum_x(x+y)p(x)p(y|x) \\ &amp;= \sum_y\sum_xxp(x)p(y|x) + \sum_y\sum_xyp(x)p(y|x) \\&amp;=\sum_xxp(x)\sum_yp(y|x) + \sum_xp(x)\sum_yyp(y|x) \\&amp;=\sum_xxp(x) + \sum_xp(x)E(Y|X=x) \\&amp;=E(X) + E_X(E(Y|X)) \\&amp;=E(X) + E(Y) \end{align}</span> So <span class="math-container">$E(X+Y)$</span> can be transformed into any of the following three: <span class="math-container">$E(X + E(Y|X))$</span>, <span class="math-container">$E(X) + E_X(E(Y|X))$</span>, <span class="math-container">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p> Answer 91: <p>For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be <em>approximated</em> by something of this form). Then $$ E(X+Y) = \frac{(x_1+y_1)+(x_2+y_2)+\cdots+(x_n+y_n)}n $$ and $$ E(X)+E(Y) = \frac{x_1+x_2+\cdots+x_n}n + \frac{y_1+y_2+\cdots+y_n}n $$ which is obviously the same.</p> Answer 92: <p>Say you have $X$ and $Y$ independent and then you turn up the correlation. Say they're mean zero too, just for simplicity. Then $X+Y$ will still be positive just as often on average as it's negative. It's just that it will be more likely that X and Y are positive or negative together. Thus the mean of $X+Y$ stays zero. However, it does increase the variance since $X+Y$ will tend to be larger in magnitude cause $X$ and $Y$ have the same sign more often.</p> Answer 93: <blockquote> <p>If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated?</p> </blockquote> <p>Being correlated or uncorrelated matters when we have $\mathsf{E}(XY)$ terms. So for uncorrelated RVs, $\mathsf{E}(XY)=\mathsf{E}(X)\mathsf{E}(Y)$. Obviously, when we consider expectation of sum of RVs, or sum of expectations, no such terms appear. Hence being correlated or not does not change anything. </p> Answer 94: <p>Here's my non-rigorous attempt:</p> <p>We know by the law of total expectation that $$E[X] = E[E[X|Y]]$$</p> <p>or in a special case, intuitively if $A_i$s partition the sample space $$E[X] = \sum_i E[X|A_i]P(A_i)$$</p> <p>That means even when $X$ is dependent on $Y$, $E[X]$ already knows about and has accounted for this dependency! How $X$ and $Y$ affect each other were known to $E[X]$ and $E[Y]$. </p> <p>For reference, <a href="https://math.stackexchange.com/a/2551102/122489">another good answer</a>. </p> Answer 95: <p>I will give a reasoning considering the dependence between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. 
<span class="math-container">\begin{align} E(X+Y) &amp; = \sum_y\sum_x(x+y)p(x,y) \\&amp; =\sum_y\sum_x(x+y)p(x)p(y|x) \\ &amp;= \sum_y\sum_xxp(x)p(y|x) + \sum_y\sum_xyp(x)p(y|x) \\&amp;=\sum_xxp(x)\sum_yp(y|x) + \sum_xp(x)\sum_yyp(y|x) \\&amp;=\sum_xxp(x) + \sum_xp(x)E(Y|X=x) \\&amp;=E(X) + E_X(E(Y|X)) \\&amp;=E(X) + E(Y) \end{align}</span> So <span class="math-container">$E(X+Y)$</span> can be transformed into any of the following three: <span class="math-container">$E(X + E(Y|X))$</span>, <span class="math-container">$E(X) + E_X(E(Y|X))$</span>, <span class="math-container">$E(X)+E(Y)$</span>. The first one is more intuitive, but the last one is more simple.</p>
How to describe this structure in spoken words?
2,080,080
9
https://math.stackexchange.com/questions/2080080/how-to-describe-this-structure-in-spoken-words
<p>$$\large f(0) = P(X = 0) = \frac{\binom{3}{0}\binom{17}{2}}{\binom{20}{2}} = \frac{68}{95},$$</p> <p>I'm working on annotating math (probability and statistics) for the visually impaired, to be read aloud by a screen reader, and I don't know what to put for this structure. </p> <p>As of now, I've just got something like "$f$ of zero equals", but I am not sure how to phrase $P(X=0)$ (is it the probability where $X = 0$?), and I really have no idea about those structures in parentheses in the quotient portion. </p> <p>Thank you.</p>
[ { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" }, { "answer_id": 2080089, "score": 19, "text": "<p>If this is at a beginning level:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p>\n</blockquote>\n\n<p>If the students are a bit more advanced, you can compress it a bit:</p>\n\n<blockquote>\n <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p>\n</blockquote>\n" }, { "answer_id": 2080101, "score": 6, "text": "<p>There are so many different ways that you can do this. Fractions can be read as \"[something] divided by [something else]\", \"numerator: [something] denominator: [something else]\" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p>\n\n<blockquote>\n <p>\"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\\text{ths}}$\"</p>\n</blockquote>\n" }, { "answer_id": 2080082, "score": 3, "text": "<p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p>\n" }, { "answer_id": 2080085, "score": 3, "text": "<p>\"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$\".</p>\n" }, { "answer_id": 2080092, "score": 3, "text": "<p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>\n" } ]
Question: How to describe this structure in spoken words? Answer 1: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 2: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parentheses, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think a pause after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ choose $0$ [pause] times $17$ choose $2$ [pause] divided by $20$ choose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 3: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 4: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator divided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 5: <p>I feel that reading it like: $f(0)$ equals the probability of $X=0$ given by $3$ choose $0$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>
</p> Answer 46: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 47: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 48: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 49: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 50: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 51: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 52: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. 
Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 53: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 54: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 55: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 56: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 57: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 58: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 59: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 60: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 61: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 62: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. 
As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 63: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 64: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 65: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 66: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 67: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 68: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 69: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 70: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 71: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 72: <p>There are so many different ways that you can do this. 
Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 73: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 74: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 75: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 76: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 77: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 78: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 79: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 80: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 81: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. 
This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 82: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 83: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 84: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 85: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p> Answer 86: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 87: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 88: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 89: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 90: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. 
</p> Answer 91: <p>If this is at a beginning level:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$; this is a fraction whose numerator is $3$ choose $0$ times $17$ choose $2$, and whose denominator is $20$ choose $2$. This evaluates to the fraction $68$ over $95$.</p> </blockquote> <p>If the students are a bit more advanced, you can compress it a bit:</p> <blockquote> <p>$f$ of zero equals the probability that $X$ equals $0$, which is $3$ choose $0$ times $17$ choose $2$ all over $20$ choose $2$, which simplifies to $68$ over $95$.</p> </blockquote> Answer 92: <p>There are so many different ways that you can do this. Fractions can be read as "[something] divided by [something else]", "numerator: [something] denominator: [something else]" or variations thereupon. Each version has its benefits, so I recommend experimenting with it. As for parenthesis, reading them out loud is not always necessary. In your example, you can get away with not reading them. Nonetheless, I think pauses after each function is helpful to the listener. Here is my recommendation:</p> <blockquote> <p>"$f$ of $0$ [pause] equals the probability that $X$ equals $0$ [pause] equals $3$ chose $0$ [pause] times $17$ chose $2$ [pause] divided by $20$ chose $2$ [pause] equals $68$ $95^{\text{ths}}$"</p> </blockquote> Answer 93: <p>I would read it as the probability that X equals zero, then three choose zero times seventeen choose two divided by twenty choose two.</p> Answer 94: <p>"The probability that $X=0$ is equal to $3$ choose $0$ times $17$ choose $2$ in the numerator devided by $20$ choose $2$ in the denominator, which is equal to $68$ over $95$".</p> Answer 95: <p>I feel that reading it like: $f (0)$ equals the probability of $X=0$ given by $3$ choose $2$ times $17$ choose $2$ divided by $20$ choose $2$ which equals $68$ over $95$. Hope it helps. </p>
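As a cross-check of the arithmetic every reading above quotes, the fraction does reduce to $68/95$; a minimal Python sketch using only the binomial coefficients named in the answers:

```python
from fractions import Fraction
from math import comb

# f(0) = P(X = 0) = C(3,0) * C(17,2) / C(20,2), as read aloud above
f0 = Fraction(comb(3, 0) * comb(17, 2), comb(20, 2))
print(f0)  # -> 68/95
```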
Constructing two matrices that do not commute
2,080,245
9
https://math.stackexchange.com/questions/2080245/constructing-two-matrices-that-do-not-commute
<p>I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.</p> <p>I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.</p> <p>Edit: looking for a simple way.</p>
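The constructions in the stored answers below are easy to verify numerically; here is a minimal NumPy sketch checking the simple $2 \times 2$ pair and the rank-one outer-product family, both taken from those answers (the specific numeric vectors chosen here are illustrative):

```python
import numpy as np

# Simple 2x2 pair from one of the answers below: AB = 0 but BA != 0
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
print(A @ B)  # [[0 0], [0 0]] -- the zero matrix
print(B @ A)  # [[0 0], [1 0]] -- nonzero

# Rank-one construction from another answer: A = u v^T, B = w v^T,
# with v not orthogonal to u and v orthogonal to w
u = np.array([1.0, 1.0])  # v . u = 1 != 0
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])  # v . w = 0
A2, B2 = np.outer(u, v), np.outer(w, v)
assert np.allclose(A2 @ B2, 0)       # A2 B2 = u (v.w) v^T = 0 since v.w = 0
assert not np.allclose(B2 @ A2, 0)   # B2 A2 = w (v.u) v^T != 0 since v.u != 0
```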
[ { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. </p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. 
Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that, by taking the determinant on both sides, at least one of the matrices must be singular. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't have the dimension of the full space, i.e., choose at least one less linearly independent column than the order of the matrix. But we must choose at least one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second-order matrices and generalizing using block matrices would work, as Andreas' answer above has done.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another, more powerful (and maybe more \"advanced\") approach:</p>\n\n<p>Let $A$ be a given matrix; now $AB-BA$ can be rewritten with the help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from the Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ to be the span of the first $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can ensure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, the image of $A$, which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
{ "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one of the matrices must be singular: taking determinants on both sides of $AB=O$ gives $\det(A)\det(B)=0$. Now choose the columns of the singular matrix $A$ arbitrarily, subject to the column space having dimension strictly smaller than that of the full space, i.e., choose at least one fewer linearly independent column than the order of the matrix; but we must choose at least one non-zero column. For the other matrix $B$, choose columns which lie exactly in the null space of the first matrix. Then we get $AB=O$, whereas, by virtue of there being some linearly independent vectors in $A$, we can arrange $BA\neq O$. I think taking examples among second-order matrices and then generalizing using block matrices works, as Andreas's answer above has done.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
{ "answer_id": 2080296, "score": 5, "text": "<p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may prove useful in other circumstances.</p>\n\n<p>I will start with a $2 \times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, if it is invertible, then $A B = 0$ implies $B = 0$, so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$.</p>\n\n<p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$, you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\begin{bmatrix}\n0 &amp; 0\\\n0 &amp; \lambda\\\n\end{bmatrix}\n$$\nwith $\lambda \ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\begin{bmatrix}\nx &amp; 0\\\ny &amp; 0\\\n\end{bmatrix}\n$$\nor, considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute\n$$\nB A =\n\begin{bmatrix}\n0 &amp; 0\\\n\lambda y &amp; 0\\\n\end{bmatrix},\n$$\nto see that $B A \ne 0$ iff $y \ne 0$. So you might as well choose $\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get\n$$ B =\n\begin{bmatrix}\n0 &amp; \lambda\\\n0 &amp; 0\\\n\end{bmatrix}\n$$\nfor some non-zero $\lambda$. For $A B = 0$ we need\n$$A =\n\begin{bmatrix}\n0 &amp; x\\\n0 &amp; y\\\n\end{bmatrix}\n$$\nor $\alpha(e_1) = 0$. Now\n$$\nB A =\n\begin{bmatrix}\n0 &amp; \lambda y\\\n0 &amp; 0\\\n\end{bmatrix},\n$$\nso again choose $y \ne 0$.</p>\n\n<p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices\n$$\n\begin{bmatrix}\nA &amp; 0\\\n0 &amp; 0\\\n\end{bmatrix},\n\begin{bmatrix}\nB &amp; 0\\\n0 &amp; 0\\\n\end{bmatrix}\n$$\nwhere $A, B$ are as above, and each $0$ is a zero matrix of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one of the matrices must be singular (take determinants on both sides of $AB=O$). Now choose the columns of the singular matrix $A$ so that the column space does not have the dimension of the full space, i.e., choose at least one fewer linearly independent column than the order of the matrix, but at least one non-zero column. For the other matrix $B$, choose columns which lie exactly in the null space of the first matrix. Then we get $AB=O$, whereas, by virtue of there being some linearly independent vectors in $A$, we can arrange $BA\neq O$. Taking examples among second-order matrices and then generalizing using block matrices works, as Andreas' answer above shows.</p>\n" }
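A minimal NumPy sketch of this null-space recipe (the particular matrices below are illustrative assumptions, not taken from the answer):

```python
import numpy as np

# A singular matrix A (rank 1), and B whose columns lie in the null space of A.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # singular: second row is twice the first
n = np.array([2.0, -1.0])         # A @ n = 0, so n spans null(A)

B = np.column_stack([n, 3 * n])   # every column of B lies in null(A)

print(A @ B)                      # zero matrix, since A kills each column of B
print(B @ A)                      # nonzero: the columns of A do not lie in null(B)
```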
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\\mathrm A := \\mathrm u \\mathrm v^{\\top} \\qquad \\qquad \\qquad \\mathrm B := \\mathrm w \\mathrm v^{\\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\\mbox{tr} (\\mathrm A) = \\mathrm v^{\\top} \\mathrm u \\neq 0 \\qquad \\qquad \\qquad \\mbox{tr} (\\mathrm B) = \\mathrm v^{\\top} \\mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\\mathrm A \\mathrm B = \\mathrm u \\underbrace{\\mathrm v^{\\top} \\mathrm w}_{= \\mbox{tr} (\\mathrm B)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm u \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm B) \\cdot \\mathrm A = \\mathrm O_n$$</p>\n\n<p>$$\\mathrm B \\mathrm A = \\mathrm w \\underbrace{\\mathrm v^{\\top}\\mathrm u}_{= \\mbox{tr} (\\mathrm A)} \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm w \\mathrm v^{\\top} = \\mbox{tr} (\\mathrm A) \\cdot \\mathrm B \\neq \\mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another more powerful (and maybe more \"advanced\"):</p>\n\n<p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\\text{vec}(B) - M_{AL}\\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p>\n\n<p>We can now first pick a vectorization $\\text{vec}$ and an objective matrix $C$ and try to solve $$\\min_B\\{\\|(M_{AR}-M_{AL})\\text{vec}(B)-\\text{vec}(C)\\|\\}$$</p>\n\n<p>Where $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. 
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>it contains 5 free parameters and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice </p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p>\n\n<p>Will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$. 
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$. 
Define</p>\n\n<p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p>\n\n<p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p>\n" }, { "answer_id": 2080389, "score": 2, "text": "<p>Yet another, more powerful (and maybe more \"advanced\") approach:</p>\n\n<p>Let $A$ be a given matrix; now $AB-BA$ can be rewritten with the help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the right by $A$ and $M_{AL}$ from the left by $A$.</p>\n\n<p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p>\n\n<p>Here $C = AB-BA$ is something we can choose.</p>\n" }, { "answer_id": 2080767, "score": 2, "text": "<p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p>\n\n<p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p>\n\n<p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$.
An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>\n" }, { "answer_id": 2080271, "score": 7, "text": "<p>A whole family:\n$$ A=\\begin{bmatrix}0&amp;a\\\\0&amp;a'\\end{bmatrix}, \\quad B=\\begin{bmatrix}b&amp;b'\\\\0&amp;0\\end{bmatrix}$$\nwith non-orthogonal vectors $(a,a')$ and $(b, b')$.</p>\n" }, { "answer_id": 2080273, "score": 6, "text": "<p>The problem reduces to solving a system of linear equations.</p>\n\n<p>Let</p>\n\n<p>$$x=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; d \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$y=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n g &amp; h \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>Now solve the 4 linear equations resulting from $x.y=0$.</p>\n\n<p>The solution with the maximum number of free parameters (which is the one with $a\\neq 0,b\\neq 0$) is</p>\n\n<p>$$xs=\\left(\n\\begin{array}{cc}\n a &amp; b \\\\\n c &amp; \\frac{b c}{a} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys=\\left(\n\\begin{array}{cc}\n e &amp; f \\\\\n -\\frac{a e}{b} &amp; -\\frac{a f}{b} \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>It contains 5 free parameters, and the products are</p>\n\n<p>$$xs.ys = \\left(\n\\begin{array}{cc}\n 0 &amp; 0 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$ys.xs = \\left(\n\\begin{array}{cc}\n a e+c f &amp; b e+\\frac{b c f}{a} \\\\\n -\\frac{e a^2}{b}-\\frac{c f a}{b} &amp; -a e-c f \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>The choice</p>\n\n<p>$$\\{a\\to 1,b\\to 1,c\\to 0,e\\to 1,f\\to 0\\}$$</p>\n\n<p>gives</p>\n\n<p>$$xx = \\left(\n\\begin{array}{cc}\n 1 &amp; 1 \\\\\n 0 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>$$yy = \\left(\n\\begin{array}{cc}\n 1 &amp; 0 \\\\\n -1 &amp; 0 \\\\\n\\end{array}\n\\right)$$</p>\n\n<p>EDIT</p>\n\n<p>As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p>\n" }, { "answer_id": 2080248, "score": 5, "text": "<p>$A = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 0 \\end{bmatrix}$, $B = \\begin{bmatrix} 0 &amp; 0 \\\\ 1 &amp; 0\\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial, right?</p>\n" }, { "answer_id": 2080296, "score": 5, "text": "<p>$\\newcommand{\\im}{\\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may prove useful in other circumstances.</p>\n\n<p>I will start with a $2 \\times 2$ example, and then generalize.</p>\n\n<p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertible, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p>\n\n<p>Now consider the underlying linear maps $\\alpha, \\beta$. We will have $\\dim(\\ker(\\beta)) = \\dim(\\im(\\beta)) = 1$.
</p>\n\n<p>Consider first the case when $\\ker(\\beta) \\ne \\im(\\beta)$, so that if you choose non-zero vectors in $e_1 \\in \\ker(\\beta)$ and $e_2 \\in \\im(\\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have\n$$ B =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n0 &amp; \\lambda\\\\\n\\end{bmatrix}\n$$\nwith $\\lambda \\ne 0$.</p>\n\n<p>If you want $A B = 0$, you need $A$ to be of the form\n$$A =\n\\begin{bmatrix}\nx &amp; 0\\\\\ny &amp; 0\\\\\n\\end{bmatrix}\n$$\nor considering $\\alpha$, you need $\\alpha(e_2) = 0$, so that $\\alpha(\\im(\\beta)) = 0$. Now compute\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; 0\\\\\n\\lambda y &amp; 0\\\\\n\\end{bmatrix},\n$$\nto see that $B A \\ne 0$ iff $y \\ne 0$. So you might as well choose $\\lambda = y = 1$, $x = 0$.</p>\n\n<p>When $\\ker(\\beta) = \\im(\\beta)$, let $e_1 \\in \\ker(\\beta)$ be non-zero, and choose $e_2 \\notin \\ker(\\beta)$ to get\n$$ B =\n\\begin{bmatrix}\n0 &amp; \\lambda\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nfor some non-zero $\\lambda$. For $A B = 0$ we need\n$$A =\n\\begin{bmatrix}\n0 &amp; x\\\\\n0 &amp; y\\\\\n\\end{bmatrix}\n$$\nor $\\alpha(e_1) = 0$. Now\n$$\nB A =\n\\begin{bmatrix}\n0 &amp; \\lambda y\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n$$\nso again choose $y \\ne 0$.</p>\n\n<p>If you want an $n \\times n$ example, with $n \\ge 2$, take $n \\times n$ block matrices\n$$\n\\begin{bmatrix}\nA &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix},\n\\begin{bmatrix}\nB &amp; 0\\\\\n0 &amp; 0\\\\\n\\end{bmatrix}\n$$\nwhere $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>\n" }, { "answer_id": 2080263, "score": 4, "text": "<p>We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one matrix must be singular, as can be seen by taking determinants on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't have the dimension of the full space, i.e., choose at least one fewer linearly independent column than the order of the matrix. But we must choose at least one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works well, as Andreas's answer has done above.</p>\n" }, { "answer_id": 2080317, "score": 2, "text": "<p>Pick $\\mathrm u, \\mathrm v, \\mathrm w \\in \\mathbb R^n \\setminus \\{0_n\\}$ such that $\\neg (\\mathrm u \\perp \\mathrm v)$ and $\\mathrm v \\perp \\mathrm w$.
Define</p>\n\n<p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p>\n\n<p>whose traces are</p>\n\n<p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p>\n\n<p>Hence</p>\n\n<p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p>\n\n<p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p>\n" } ]
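<p>The outer-product construction in the last answer is easy to sanity-check numerically. The snippet below is a minimal sketch, assuming NumPy is available; the particular vectors are arbitrary choices satisfying the required conditions $\mathrm v^{\top} \mathrm u \neq 0$ and $\mathrm v^{\top} \mathrm w = 0$:</p>

<pre><code>import numpy as np

# Arbitrary example vectors: v is not orthogonal to u, but v is orthogonal to w.
u = np.array([1.0, 2.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 3.0, 0.0])

A = np.outer(u, v)  # A = u v^T
B = np.outer(w, v)  # B = w v^T

print(np.allclose(A @ B, 0))  # True:  AB = (v.w) * u v^T = 0
print(np.allclose(B @ A, 0))  # False: BA = (v.u) * w v^T is nonzero
</code></pre>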
Question: Constructing two matrices that do not commute Answer 1: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 2: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$.</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>It contains 5 free parameters, and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice</p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 3: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial, right?</p> Answer 4: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may prove useful in other circumstances.</p> <p>I will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertible, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$.
For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 5: <p>We can think of constructing such matrices by using the rank-nullity theorem. Note that at least one matrix must be singular, as can be seen by taking determinants on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't have the dimension of the full space, i.e., choose at least one fewer linearly independent column than the order of the matrix. But we must choose at least one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works well, as Andreas's answer has done above.</p> Answer 6: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 7: <p>Yet another, more powerful (and maybe more "advanced") approach:</p> <p>Let $A$ be a given matrix; now $AB-BA$ can be rewritten with the help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the right by $A$ and $M_{AL}$ from the left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Here $C = AB-BA$ is something we can choose.</p> Answer 8: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation.
So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 9: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 10: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. 
The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 11: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 12: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 13: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 14: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. 
Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 15: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 16: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. 
For instance just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 17: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 18: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 19: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 20: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 21: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 22: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 23: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 24: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Answer 23: <p>Yet another, more powerful (and maybe more "advanced") approach:</p> <p>Let $A$ be a given matrix; then $AB-BA$ can be rewritten, with the help of matrix multiplication and vectorization, as $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the right by $A$ and $M_{AL}$ from the left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\},$$ where $C = AB-BA$ is something we can choose.</p>
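<p>One standard way to realize these multiplication matrices, assuming column-stacking vectorization, is via Kronecker products: $\text{vec}(AB) = (I \otimes A)\,\text{vec}(B)$ and $\text{vec}(BA) = (A^{\top} \otimes I)\,\text{vec}(B)$. A minimal NumPy sketch along these lines (the choice of $A$ and of the target $C$ is mine; with these conventions $(M_{AR}-M_{AL})\,\text{vec}(B) = \text{vec}(BA-AB)$, so $C$ here prescribes $BA-AB$):</p> <pre><code>import numpy as np

# Column-major vec identities: vec(A @ X) = (I kron A) vec(X),
#                              vec(X @ A) = (A^T kron I) vec(X).
n = 2
A = np.array([[1.0, 0.0], [0.0, 0.0]])
I = np.eye(n)
M_AL = np.kron(I, A)    # left multiplication by A:  M_AL @ vec(B) == vec(A @ B)
M_AR = np.kron(A.T, I)  # right multiplication by A: M_AR @ vec(B) == vec(B @ A)

# Ask for BA - AB = [[0, 0], [1, 0]] and solve the least-squares problem for vec(B).
C = np.array([[0.0, 0.0], [1.0, 0.0]])
vecB, *_ = np.linalg.lstsq(M_AR - M_AL, C.flatten(order='F'), rcond=None)
B = vecB.reshape((n, n), order='F')

assert np.allclose(B @ A - A @ B, C)
assert np.allclose(A @ B, 0)   # here the minimum-norm solution even gives AB = 0
</code></pre>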
Answer 24: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify, say, $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple, take $U$ the span of the first $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with the first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want it to <em>not contain</em>$~U$, the image of $A$, which also has dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but will in general change the kernel. For instance, just permute some of the zero columns with nonzero columns, which will work fine. Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p>
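<p>A minimal NumPy sketch of this recipe with $n=3$ and $k=1$ (the concrete instantiation is mine):</p> <pre><code>import numpy as np

# A projects onto U = span(e1) along W = span(e2, e3):
A = np.diag([1.0, 0.0, 0.0])
# P projects onto W along U; it commutes with A, so AP = PA = 0 -- not what we want:
P = np.diag([0.0, 1.0, 1.0])
assert np.allclose(A @ P, 0) and np.allclose(P @ A, 0)

# Column operation: swap the zero first column of P with its second column.
# This keeps the column space (still W, so AB = 0) but moves the kernel of B
# from span(e1) to span(e2), which no longer contains the image of A.
B = P[:, [1, 0, 2]]
assert np.allclose(A @ B, 0)
assert not np.allclose(B @ A, 0)   # B @ A has a 1 in position (2, 1)
</code></pre> <p>(As predicted above, the resulting $B$ is no longer a projection: $B^2 \neq B$.)</p>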
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 29: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 30: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 31: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 32: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 33: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 34: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 35: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 36: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 37: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 38: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 39: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 40: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 41: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 42: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 43: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 44: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 45: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 46: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 47: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 48: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 49: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 50: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 51: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 52: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 53: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 54: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 55: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 56: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 57: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 58: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 59: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 60: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 61: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 62: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 63: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 64: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 65: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 66: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 67: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 68: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 69: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 70: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 71: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 72: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 73: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 74: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 75: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 76: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 77: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that at least one matrix must be singular: taking determinants on both sides of $AB=O$ gives $\det A \cdot \det B = 0$. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't have the dimension of the full space, i.e., choose at least one fewer linearly independent column than the order of the matrix. But we must choose at least one non-zero vector. For the other matrix $B$, choose columns which are exactly in the null space of the first matrix. Then we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second-order matrices and generalizing using block matrices works well, as Andreas' answer has done above.</p> Answer 78: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 79: <p>Yet another, more powerful (and maybe more "advanced") approach:</p> <p>Let $A$ be a given matrix. Now $AB-BA$ can be rewritten, with the help of matrix multiplication and vectorization, as $M_{AL}\text{vec}(B) - M_{AR}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the right by $A$ and $M_{AL}$ from the left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AL}-M_{AR})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Here $C = AB-BA$ is something we can choose.</p>
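<p>A minimal sketch of this vectorization approach (an editorial illustration, not part of the original answer; it assumes the column-major $\text{vec}$ convention, under which $\text{vec}(AXB) = (B^{\top} \otimes A)\,\text{vec}(X)$, and uses arbitrary random test data): the left- and right-multiplication matrices are Kronecker products, and the minimization is an ordinary least-squares problem.</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))

# With column-major vec, vec(A X B) = kron(B.T, A) @ vec(X).  Hence:
#   vec(A @ B) = kron(I, A)   @ vec(B)   (left multiplication by A)
#   vec(B @ A) = kron(A.T, I) @ vec(B)   (right multiplication by A)
I = np.eye(n)
M_AL = np.kron(I, A)
M_AR = np.kron(A.T, I)

C = rng.standard_normal((n, n))      # an arbitrary target for AB - BA
vecC = C.flatten(order="F")

b, *_ = np.linalg.lstsq(M_AL - M_AR, vecC, rcond=None)
B = b.reshape((n, n), order="F")

# The residual is the best achievable ||(AB - BA) - C||_F; it is zero
# exactly when vec(C) lies in the range of M_AL - M_AR.
print(np.linalg.norm(A @ B - B @ A - C))
</code></pre>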
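<p>The rank-one trace construction a couple of answers up ($\mathrm A = \mathrm u \mathrm v^{\top}$, $\mathrm B = \mathrm w \mathrm v^{\top}$ with $\mathrm v^{\top}\mathrm u \neq 0$ and $\mathrm v^{\top}\mathrm w = 0$) is just as easy to verify numerically; this editorial sketch uses arbitrary illustrative vectors:</p>

<pre><code>import numpy as np

# Arbitrary choices with v . u != 0 and v . w == 0, as the construction requires
u = np.array([1.0, 2.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

A = np.outer(u, v)   # A = u v^T, tr(A) = v . u = 1
B = np.outer(w, v)   # B = w v^T, tr(B) = v . w = 0

print(np.allclose(A @ B, 0))   # True:  AB = tr(B) * A = 0
print(np.allclose(B @ A, 0))   # False: BA = tr(A) * B = B, which is nonzero
</code></pre>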
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 81: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 82: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 83: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 84: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 85: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 86: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 87: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 88: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 89: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 90: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 91: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 92: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 93: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 94: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 95: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 96: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 97: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 98: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 99: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 100: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 101: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 102: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 103: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 104: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 105: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 106: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 107: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 108: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 109: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 110: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 111: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 112: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 113: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 114: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 115: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 116: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 117: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 118: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 119: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 120: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than obtained by projecting parallel to a different complementary subspace.</p> Answer 121: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 122: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>it contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice </p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elemnts of the two matrices. The 4 euqations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 123: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples try to think of most trivial ones like $O$, $I_n$ etc. These are also kind of trivial right?</p> Answer 124: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn useful in other circumstances.</p> <p>Will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertibile, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors in $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$. 
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p> Answer 125: <p>We can think of constructing such matrices by using the Rank-Nullity theorem. Note that one matrix should be singular by taking the determinant on both sides. Now choose columns arbitrarily in the singular matrix $A$ such that the column space doesn't equal dimension of full space, i.e, choose at least one less linearly independent column than the order of matrix. But we must choose atleast one non-zero vector. For the other matrix $B$ choose columns which are exactly in the null space of the first matrix. Then, we would get $AB=O$, whereas by virtue of there being some linearly independent vectors in $A$, we would get $BA\neq O$. I think taking examples in second order matrices and generalizing using block matrices would as Andreas answer has done it above.</p> Answer 126: <p>Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define</p> <p>$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$</p> <p>whose traces are</p> <p>$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$</p> <p>Hence</p> <p>$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$</p> <p>$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$</p> Answer 127: <p>Yet another more powerful (and maybe more "advanced"):</p> <p>Let A be a given matrix, now $AB-BA$ can be rewritten with help of matrix multiplication and vectorization as: $M_{AR}\text{vec}(B) - M_{AL}\text{vec}(B)$, where $M_{AR}$ is a matrix performing multiplication from the Right by $A$ and $M_{AL}$ from Left by $A$.</p> <p>We can now first pick a vectorization $\text{vec}$ and an objective matrix $C$ and try to solve $$\min_B\{\|(M_{AR}-M_{AL})\text{vec}(B)-\text{vec}(C)\|\}$$</p> <p>Where $C = AB-BA$ is something we can choose.</p> Answer 128: <p>There is a quite easy general way to get an example. Saying $AB=0$ means that the image space of $B$ must be contained in the kernel of $A$, while $BA\neq0$ means the image of $A$ is not contained in the kernel of $B$. 
Clearly you won't get an example by choosing $A$ or $B$ to be zero or invertible, so their images and kernels have to be proper nonzero subspaces of the whole space $V=K^n$ these matrices act upon.</p> <p>Now the rank-nullity theorem ties the dimensions of the kernel and image of the same matrix to each other (their sum must be$~n$), but that is <em>the only</em> inevitable relation. So you can start with a simple example for which $AB=0$, and if it happens that $BA=0$ too, you can modify say $B$ in such a way that you keep its image space unchanged, but modify its kernel (to not contain the image of $A$). For instance, let $A$ be a projection on some subspace $U$ of $V$ parallel to a complementary subspace $W$. To keep things simple take $U$ the span of an initial $k$ standard basis vectors and $W$ the span of the remaining $n-k$ vectors, so $A$ is diagonal with first $k$ diagonal entries equal to$~1$ and the remaining $n-k$ diagonal entries equal to$~0$. You can assure $AB=0$ by having $B$ be a projection on the subspace $W$ (which is the kernel of $A$). Now the kernel of $B$ has dimension$~k$, and we want this to <em>not contain</em>$~U$, image of $A$ which does have dimension$~k$ (so we just need those subspaces to differ). That excludes the most obvious choice of taking for$~B$ the projection$~P$ parallel to the complement $U$ of$~W$ (that's the diagonal matrix with entries $0$ where $A$ has$~1$ and vice versa).</p> <p>There are at least two ways to fix this. One is to simply choose a different complement $U'$ of$~W$. This is easy to achieve, for instance by just changing one of the spanning (standard basis) vectors of $U$ to have a nonzero coordinate for one of the (final standard basis) vectors spanning $W$. An easier and more concrete approach is to take the (diagonal) matrix of $P$, and perform some column operation(s) on it, which will not change the image (column space), but it will in general change the kernel. For instance just permute some of the zero columns with nonzero columns, which will work fine. 
Quite likely the result will no longer be a projection matrix at all, so this is a more radical change than the one obtained by projecting parallel to a different complementary subspace.</p> Answer 129: <p>A whole family: $$ A=\begin{bmatrix}0&amp;a\\0&amp;a'\end{bmatrix}, \quad B=\begin{bmatrix}b&amp;b'\\0&amp;0\end{bmatrix}$$ with non-orthogonal vectors $(a,a')$ and $(b, b')$.</p> Answer 130: <p>The problem reduces to solving a system of linear equations.</p> <p>Let</p> <p>$$x=\left( \begin{array}{cc} a &amp; b \\ c &amp; d \\ \end{array} \right)$$</p> <p>$$y=\left( \begin{array}{cc} e &amp; f \\ g &amp; h \\ \end{array} \right)$$</p> <p>Now solve the 4 linear equations resulting from $x.y=0$.</p> <p>The solution with the maximum number of free parameters (which is the one with $a\neq 0,b\neq 0$) is</p> <p>$$xs=\left( \begin{array}{cc} a &amp; b \\ c &amp; \frac{b c}{a} \\ \end{array} \right)$$</p> <p>$$ys=\left( \begin{array}{cc} e &amp; f \\ -\frac{a e}{b} &amp; -\frac{a f}{b} \\ \end{array} \right)$$</p> <p>It contains 5 free parameters and the products are</p> <p>$$xs.ys = \left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$ys.xs = \left( \begin{array}{cc} a e+c f &amp; b e+\frac{b c f}{a} \\ -\frac{e a^2}{b}-\frac{c f a}{b} &amp; -a e-c f \\ \end{array} \right)$$</p> <p>The choice</p> <p>$$\{a\to 1,b\to 1,c\to 0,e\to 1,f\to 0\}$$</p> <p>gives</p> <p>$$xx = \left( \begin{array}{cc} 1 &amp; 1 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>$$yy = \left( \begin{array}{cc} 1 &amp; 0 \\ -1 &amp; 0 \\ \end{array} \right)$$</p> <p>EDIT</p> <p>As to the number of free parameters: to begin with we have 8 parameters, the elements of the two matrices. The 4 equations would reduce them to 8-4 = 4, but as at most three equations are independent, the number is 5.</p> Answer 131: <p>$A = \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0\end{bmatrix}.$ When thinking of examples, try to think of the most trivial ones, like $O$, $I_n$, etc. These are also kind of trivial, right?</p> Answer 132: <p>$\newcommand{\im}{\mathrm{im}}$Perhaps it is worth recording a strategy to construct such an example, which may turn out useful in other circumstances.</p> <p>We will start with a $2 \times 2$ example, and then generalize.</p> <p>First of all, both matrices have to have rank $1$. For, if $A$ has rank $2$, that is, it is invertible, then $A B = 0$ implies $B = 0$ so that $B A = 0$ as well. If $A$ has rank zero, then $A = 0$, and both products are zero.</p> <p>Now consider the underlying linear maps $\alpha, \beta$. We will have $\dim(\ker(\beta)) = \dim(\im(\beta)) = 1$. </p> <p>Consider first the case when $\ker(\beta) \ne \im(\beta)$, so that if you choose non-zero vectors $e_1 \in \ker(\beta)$ and $e_2 \in \im(\beta)$ you will get a basis of the underlying vector space. With respect to this basis you have $$ B = \begin{bmatrix} 0 &amp; 0\\ 0 &amp; \lambda\\ \end{bmatrix} $$ with $\lambda \ne 0$.</p> <p>If you want $A B = 0$, you need $A$ to be of the form $$A = \begin{bmatrix} x &amp; 0\\ y &amp; 0\\ \end{bmatrix} $$ or, considering $\alpha$, you need $\alpha(e_2) = 0$, so that $\alpha(\im(\beta)) = 0$. Now compute $$ B A = \begin{bmatrix} 0 &amp; 0\\ \lambda y &amp; 0\\ \end{bmatrix}, $$ to see that $B A \ne 0$ iff $y \ne 0$.
So you might as well choose $\lambda = y = 1$, $x = 0$.</p> <p>When $\ker(\beta) = \im(\beta)$, let $e_1 \in \ker(\beta)$ be non-zero, and choose $e_2 \notin \ker(\beta)$ to get $$ B = \begin{bmatrix} 0 &amp; \lambda\\ 0 &amp; 0\\ \end{bmatrix} $$ for some non-zero $\lambda$. For $A B = 0$ we need $$A = \begin{bmatrix} 0 &amp; x\\ 0 &amp; y\\ \end{bmatrix} $$ or $\alpha(e_1) = 0$. Now $$ B A = \begin{bmatrix} 0 &amp; \lambda y\\ 0 &amp; 0\\ \end{bmatrix}, $$ so again choose $y \ne 0$.</p> <p>If you want an $n \times n$ example, with $n \ge 2$, take $n \times n$ block matrices $$ \begin{bmatrix} A &amp; 0\\ 0 &amp; 0\\ \end{bmatrix}, \begin{bmatrix} B &amp; 0\\ 0 &amp; 0\\ \end{bmatrix} $$ where $A, B$ are as above, and $0$ are zero matrices of the appropriate size.</p>
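A quick numerical sanity check of the construction above (this sketch is an editorial addition, not part of the original answers): it verifies the $\lambda = y = 1$, $x = 0$ choice from the first case, and the $n \times n$ block embedding for $n = 4$.

```python
import numpy as np

# The lambda = y = 1, x = 0 choice from the 2x2 construction above:
# B = [[0,0],[0,1]] and A = [[0,0],[1,0]], so AB = 0 while BA = [[0,0],[1,0]].
A = np.array([[0, 0],
              [1, 0]])
B = np.array([[0, 0],
              [0, 1]])

assert not (A @ B).any()   # AB is the zero matrix
assert (B @ A).any()       # BA is not: it equals [[0,0],[1,0]]

# The n x n block embedding (n = 4): pad A and B into the top-left corner.
n = 4
An = np.zeros((n, n), dtype=int); An[:2, :2] = A
Bn = np.zeros((n, n), dtype=int); Bn[:2, :2] = B
assert not (An @ Bn).any() and (Bn @ An).any()
```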
Finite set theory and the Axiom of the Power Set
2,080,958
9
https://math.stackexchange.com/questions/2080958/finite-set-theory-and-the-axiom-of-the-power-set
<p>I've been thinking about set theory recently, and one of the things I noticed was that if we restrict to finite sets, then the power set can be constructed through repeated application of the axioms of separation, pairing, and union. And that made me wonder how ZFC looks when we replace the Axiom of Infinity with its negation.</p> <p>So I have a few different questions. For this question, "finite set theory" refers to the Axioms of Extensionality, Empty Set, Pairing, Union, Separation, and the negation of Infinity.</p> <ul> <li><p>Can finite set theory prove that no infinite sets exist? The negation of infinity cuts off the cumulative hierarchy at $V_\omega$, which might be enough given Foundation, but it's not entirely clear to me that the nonexistence of an inductive set implies the nonexistence of any infinite set.</p></li> <li><p>Of course, the cumulative hierarchy requires the power set operation, so this raises my initial question. Is the Axiom of Power Set a theorem of finite set theory?</p></li> <li><p>Are the Axioms of Replacement and Choice theorems of finite set theory? Since they both involve functions, a related question is whether functions can be constructed in finite set theory without the power set operation.</p></li> <li><p>Given responses below, it seems like the Axiom of Foundation may be more important than I thought. How does it affect the answers to these questions if added to "finite set theory"?</p></li> </ul>
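To illustrate the finite power-set construction the question opens with (this sketch is an editorial addition, not part of the original post): the recursion $P(S\cup\{x\}) = P(S)\cup\{t\cup\{x\} : t\in P(S)\}$ only needs pairing-, union-, and separation-style steps. The encoding of hereditarily finite sets via Python `frozenset`s is just one convenient choice.

```python
def powerset(s):
    """Power set of a finite set, built by repeated pairing and union:
    P(S u {x}) = P(S) u { t u {x} : t in P(S) }."""
    ps = {frozenset()}                        # P(emptyset) = {emptyset}
    for x in s:
        ps |= {t | frozenset([x]) for t in ps}
    return ps

# The von Neumann ordinal 2 = {0, {0}}, coded as nested frozensets:
two = frozenset([frozenset(), frozenset([frozenset()])])
assert len(powerset(two)) == 2 ** len(two)    # |P(2)| = 4
```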
[ { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" } ]
To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). 
It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. 
Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). 
</p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. 
<em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). 
Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. 
Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). 
Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" }, { "answer_id": 2081090, "score": 7, "text": "<p>Here is a model of your \"finite set theory\" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\\{\\emptyset,\\{\\emptyset\\},\\{\\{\\emptyset\\}\\},\\{\\{\\{\\emptyset\\}\\}\\},\\dots\\}$ and let $M$ be the closure of $V_\\omega\\cup\\{A\\}$ under Pairing, Union, and taking subsets (so if $X\\in M$ and $Y\\subseteq X$ then $Y\\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p>\n\n<p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\\mathcal{P}(A)\\not\\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\\to\\omega$ can be implemented in $M$, so Replacement would imply $\\omega$ is a set.</p>\n\n<p>This model does satisfy Choice in the form \"if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them\" (since $M$ contains all subsets of $\\bigcup X$). 
It does not satisfy Choice in the form \"if $X$ is a set of nonempty sets then there exists a choice function $X\\to\\bigcup X$\", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\\setminus\\{\\emptyset\\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p>\n" }, { "answer_id": 2081012, "score": 6, "text": "<p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\\omega$-model $M$ of ZF-Foundation+\"there is an infinite set $A$ of Quine atoms\" (a Quine atom is a set satisfying $x=\\{x\\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\\omega^M\\cup A\\cup\\{[A]^n: n\\in\\mathbb{N}\\}$$ under Pairing, Union, and Separation. <em>Here \"$[X]^k$\" denotes the set of $k$-element subsets of $X$.</em></p>\n\n<p>$W$ satisfies finite set theory, but not Powerset ($\\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p>\n\n<p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. Meanwhile, incorporating Foundation seems difficult.</p>\n\n<hr>\n\n<p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\\omega^M\\cup A\\cup\\mathcal{P}(A)\\cup\\{[\\mathcal{P}(A)]^n: n\\in\\mathbb{N}\\}$$ instead, where $M$ satisfies \"$A$ is an <strong>amorphous</strong> set of Quine atoms\", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>\n" } ]
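A worked gloss on the Replacement failure in the first answer above, as a minimal sketch: the answer only alludes to "the usual recursive definition" of the bijection $A\to\omega$, so the display below is an illustrative reconstruction rather than text from that answer. Since every element of $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\dots\}$ is obtained from $\emptyset$ by finitely many singleton steps, the recursion $$F(\emptyset)=0,\qquad F(\{x\})=F(x)+1$$ defines a class function with domain $A$, and it is a bijection $A\to\omega$. If $M$ satisfied Replacement, the image $F[A]=\omega$ would be a set of $M$, contradicting the negation of Infinity.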
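Likewise, for the amorphous variant in the second answer, the offending map admits an explicit form (again just a sketch under that answer's hypotheses): for $S\subseteq A$ put $$g(S)=\begin{cases}|S| & \text{if } S \text{ is finite},\\ |A\setminus S| & \text{otherwise};\end{cases}$$ since $A$ is amorphous, every $S\subseteq A$ is finite or cofinite, so $g$ is a total map $\mathcal{P}(A)\to\omega$, and it is surjective because $A$ has finite subsets of every size. Replacement applied to $g$ would again make its image $\omega$ a set, which is why that image "does not exist" in $W$.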
Question: Finite set theory and the Axiom of the Power Set Answer 1: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 2: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 17: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 18: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 19: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 20: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 21: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 22: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 23: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 24: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 25: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 26: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 27: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 28: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 29: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 30: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 31: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 32: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 33: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 34: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 35: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 36: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p> Answer 37: <p>Here is a model of your "finite set theory" (including Foundation) in which there is an infinite set and Power Set and Replacement fail. Let $A=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\},\{\{\{\emptyset\}\}\},\dots\}$ and let $M$ be the closure of $V_\omega\cup\{A\}$ under Pairing, Union, and taking subsets (so if $X\in M$ and $Y\subseteq X$ then $Y\in M$). It is clear that $M$ satisfies all of your axioms except possibly the negation of Infinity. To prove the negation of Infinity, note that if $X\in M$, then the transitive closure of $X$ contains only finitely many elements of cardinality $&gt;1$ (since this is true of $A$ and for every element of $V_\omega$, and is preserved by taking pairs, unions, and subsets). So $M$ cannot contain any inductive set.</p> <p>However, $M$ does contain an infinite set, namely $A$. It is also clear that $M$ fails to satisfy Power Set, since $M$ contains every subset of $A$ but $\mathcal{P}(A)\not\in M$ (either by the criterion mentioned above, or by noting that every element of $M$ is countable). Replacement also fails, since the usual recursive definition of the obvious bijection $A\to\omega$ can be implemented in $M$, so Replacement would imply $\omega$ is a set.</p> <p>This model does satisfy Choice in the form "if $X$ is a set of disjoint nonempty sets then there is a set that contains one element from each of them" (since $M$ contains all subsets of $\bigcup X$). It does not satisfy Choice in the form "if $X$ is a set of nonempty sets then there exists a choice function $X\to\bigcup X$", basically because it is very hard to construct functions as sets in $M$ (for instance, if $X=A\setminus\{\emptyset\}$, the unique choice function for $X$ would have infinitely many 2-element sets in its transitive closure). Probably it is possible to build a model where any reasonable form of Choice fails, but I don't know how exactly to do that at the moment.</p> Answer 38: <p>Since you don't include Foundation, we can get a very strange model indeed. Namely, consider an $\omega$-model $M$ of ZF-Foundation+"there is an infinite set $A$ of Quine atoms" (a Quine atom is a set satisfying $x=\{x\}$, and the existence of many Quine atoms is consistent with ZF-Foundation). Now look at the structure $W$ which is the closure of $$V_\omega^M\cup A\cup\{[A]^n: n\in\mathbb{N}\}$$ under Pairing, Union, and Separation. <em>Here "$[X]^k$" denotes the set of $k$-element subsets of $X$.</em></p> <p>$W$ satisfies finite set theory, but not Powerset ($\mathcal{P}(A)$ doesn't exist) or Choice (there is no choice function for the set of two-element subsets of $A$). </p> <p>I am not sure whether $W$ satisfies Replacement, but I suspect it <strong>does</strong>, and I don't immediately see how to add a failure of Replacement. 
Meanwhile, incorporating Foundation seems difficult.</p> <hr> <p>If we define $W$ as the Pairing/Union/Separation-closure of $$V_\omega^M\cup A\cup\mathcal{P}(A)\cup\{[\mathcal{P}(A)]^n: n\in\mathbb{N}\}$$ instead, where $M$ satisfies "$A$ is an <strong>amorphous</strong> set of Quine atoms", then I believe we get a model of finite set theory together with the negation of Replacement and Powerset; powerset fails since we don't have $\mathcal{P}^2(A)$, and Replacement fails since the image of the map sending each subset of $A$ to its cardinality if it is finite, or the cardinality of its complement otherwise, does not exist.</p>
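An editorial aside, not part of the original answer: the Replacement failure claimed for the amorphous variant can be made explicit. A minimal sketch in LaTeX, assuming (as the answer does) that $A$ is amorphous, so that every subset of $A$ is finite or cofinite and the map $F$ below is total:

```latex
% Sketch: F sends S \subseteq A to |S| if S is finite, else to |A \setminus S|.
\[
  \operatorname{im}(F)
    \;=\; \{\, \lvert S\rvert : S \subseteq A \text{ finite} \,\}
          \;\cup\; \{\, \lvert A \setminus S\rvert : S \subseteq A \text{ cofinite} \,\}
    \;=\; \mathbb{N},
\]
% since the infinite set A has subsets of every finite size n, every n occurs.
% If im(F) existed as a set of W, it would be the inductive set \omega,
% contradicting the negation of Infinity that finite set theory asserts.
```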
Squares with prescribed trace in p-adic field extension
2,080,996
9
https://math.stackexchange.com/questions/2080996/squares-with-prescribed-trace-in-p-adic-field-extension
<p>Let K be a finite extension of some p-adic field $\mathbb{Q}_p$, and $L|K$ the unramified cubic extension. I have a statement about a certain $K$-algebra being split by this extension which I can prove by the general formalism of Brauer groups. Checking the down-to-earth equations I noticed that it would also follow from the following:</p> <blockquote> <p>For every $y \in L$, there is a nonzero square $x^2\in L$ such that $Tr(x^2y) = 0$,</p> </blockquote> <p>where Tr is the trace of $L|K$. So I wondered if this is true. In other words, does the (K-)two-dimensional kernel of $Tr( \cdot y)$ contain a nonzero square?</p> <p>Note if $y$ is a square itself, we are just asking for a nonzero square in $L$ with trace 0. However plausible this seems to me, I can only see one if the extension is a Kummer extension (i.e. $K$ contains the third roots of unity), in which case there is a generator $x$ of $L|K$ with minimal polynomial $X^3-a$, $a\in K$, and $x^2$ will do as it has minimal polynomial $X^3-a^2$. Since my extension is unramified, some expression in roots of unity might do in general, but I don't see it.</p>
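A quick editorial aside, not part of the original question: since $L|K$ is unramified, the residue extension is $\mathbb{F}_{q^3}|\mathbb{F}_q$ and the trace reduces to the finite-field trace $x + x^q + x^{q^2}$, so the residue-field analogue of the boxed claim can be brute-forced. The sketch below does this in pure Python for $q = p = 5$; the modulus $t^3 + t + 1$ (irreducible over $\mathbb{F}_5$, as it has no roots) and all names are illustrative choices, not taken from the post.

```python
# Hedged sanity check, not from the original post: the residue-field analogue
# of "every nonzero y admits a nonzero square x^2 with Tr(x^2 y) = 0",
# checked by brute force in F_{p^3} = F_p[t]/(t^3 + t + 1) for p = 5.

p = 5

def mul(a, b):
    """Multiply a, b in F_p[t]/(t^3 + t + 1); elements are triples (c0, c1, c2)."""
    res = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    for d in (4, 3):                    # reduce using t^3 = -t - 1
        c, res[d] = res[d], 0
        res[d - 2] = (res[d - 2] - c) % p
        res[d - 3] = (res[d - 3] - c) % p
    return tuple(res[:3])

def power(a, n):
    r = (1, 0, 0)
    while n:
        if n & 1:
            r = mul(r, a)
        a = mul(a, a)
        n >>= 1
    return r

def trace(a):
    """Finite-field trace F_{p^3} -> F_p, i.e. a + a^p + a^{p^2}."""
    s = [sum(c) % p for c in zip(a, power(a, p), power(a, p * p))]
    assert s[1] == s[2] == 0            # the trace lands in the prime field
    return s[0]

elems = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
squares = {mul(x, x) for x in elems if x != (0, 0, 0)}
for y in elems[1:]:                     # skip y = 0
    assert any(trace(mul(s, y)) == 0 for s in squares), y
print("residue-field analogue holds for p =", p)
```

One caveat: for $p = 2$ squaring is a bijection on $\mathbb{F}_{2^3}$, so this check is vacuous there; the $p = 2$ subtleties raised in the answers live in the unit group $U^{(1)}$, which the residue field cannot see.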
[ { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. 
This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. 
(actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. 
The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. 
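A few lines of sympy can confirm this Gram matrix for a concrete choice of coefficients. This is a sketch under stated assumptions: it takes $a = b = 1$ and relies on sympy's `invert` and `rem` for polynomial arithmetic modulo $f$:

```python
import sympy as sp

X = sp.symbols('X')
a_val, b_val = 1, 1                        # f = X^3 + X + 1, irreducible over Q
f = X**3 + a_val * X + b_val
fp_inv = sp.invert(sp.diff(f, X), f, X)    # 1/f'(theta) as a polynomial in theta

# Companion matrix of f: multiplication by theta on the basis (1, theta, theta^2).
C = sp.Matrix([[0, 0, -b_val],
               [1, 0, -a_val],
               [0, 1, 0]])

def mult_matrix(g):
    """Matrix of multiplication by g(theta) on L = Q[X]/(f)."""
    g = sp.rem(sp.expand(g), f, X)
    coeffs = sp.Poly(g, X).all_coeffs()[::-1]           # ascending degree
    return sum((c * C**i for i, c in enumerate(coeffs)), sp.zeros(3))

# Gram matrix of (x, y) -> Tr(x*y/f'(theta)) in the basis (1, theta, theta^2).
gram = sp.Matrix(3, 3, lambda i, j: mult_matrix(X**(i + j) * fp_inv).trace())
print(gram)  # Matrix([[0, 0, 1], [0, 1, 0], [1, 0, -1]]); the entry -1 is -a
```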
Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. 
Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. 
The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. 
Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. 
The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. 
This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. 
We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. 
Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. 
Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. 
The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" }, { "answer_id": 2095079, "score": 5, "text": "<p>When $p \\neq 2$, we can prove every $K$-linear map $L \\to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p>\n\n<blockquote>\n <p><strong>Theorem:</strong> If $p \\neq 2$ and $\\alpha \\in L^\\times$, then there exists $\\beta \\in K^\\times$ such that $\\alpha \\beta$ is square.</p>\n</blockquote>\n\n<p>The abelian group $L^\\times$ can be factored into three components</p>\n\n<p>$$ L^\\times \\cong \\mathbb{Z} \\times \\mu^{(p)}(L) \\times U_L^{(1)} $$</p>\n\n<p>where $\\mathbb{Z}$ records the valuation, $\\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\\beta$ by adjusting each of the three components in turn:</p>\n\n<ul>\n<li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\\alpha \\beta$ have even valuation.</p></li>\n<li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\\mu^{(p)}(L) : \\mu^{(p)}(K)] = \\frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\\mu(K)$ are also nonsquare in $\\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\\alpha \\beta$ in $\\mu^{(p)}(L)$ is square.</p></li>\n<li><p>When $p \\neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. 
Thus, we need do nothing further.</p></li>\n</ul>\n" }, { "answer_id": 2110744, "score": 1, "text": "<p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p>\n\n<hr>\n\n<p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p>\n\n<p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \\mathbb{Q}_2$.</p>\n\n<p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p>\n\n<p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \\mapsto \\text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p>\n\n<p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \\to \\left( L^\\times / (L^\\times)^2 \\right)_{\\ker N},$$where $\\ker N$ denotes the set of elements in $L^\\times$ which have norm in $K^\\times$ equal to a square.</p>\n\n<p>You can define an interesting subset of $( L^\\times / (L^\\times)^2 )_{\\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\\times$ with $\\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\\ker N$. Call this subset $S$.</p>\n\n<p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\\times / (L^\\times)^2$.)</p>\n\n<p>So we want to know if the set $S$ equals the whole of $( L^\\times / (L^\\times)^2 )_{\\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p>\n\n<p><a href=\"http://www.math.harvard.edu/~gross/preprints/stable23.pdf\">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p>\n\n<p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p>\n\n<p>The upshot is that the set $S$ equals $( L^\\times / (L^\\times)^2 )_{\\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \\mathbb{Q}_2$ is unramified, the set $( L^\\times / (L^\\times)^2 )_{\\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p>\n\n<p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. 
The key point that makes the whole thing work is the sentence in the paper linked above:</p>\n\n<blockquote>\n <p>One can show that the resulting map$$η : \\text{H}^1(k, D) \\to \\text{H}^1(k, \\text{SO}(W)) \\cong \\mathbb{Z}/2\\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\\text{H}^1(k, J[2])$ induced from the Weil pairing.</p>\n</blockquote>\n\n<p>Another reference is the following paper of Serre, which studies related (although different) questions:</p>\n\n<p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>\n" } ]
Question: Squares with prescribed trace in p-adic field extension Answer 1: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn (a worked instance follows the list):</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (Actually, since $[L:K]$ is odd, I think this would remain true even if $L$ were ramified.) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, every element of $U_L^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul>
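To make the three adjustments concrete, here is a small worked instance; the field and the numbers are chosen purely for illustration and do not come from the answer itself. Take $K = \mathbb{Q}_7$ and let $L$ be the unramified cubic extension of $K$, so $q = 7$ and $$[\mu^{(7)}(L) : \mu^{(7)}(K)] = \frac{7^3 - 1}{7 - 1} = \frac{342}{6} = 57 = 7^2 + 7 + 1,$$ which is odd, as the second bullet requires. Write $[c]$ for the Teichmüller lift of $c$. Starting from the nonsquare $\alpha = 7 \cdot [3] \cdot (1 + 7)$ (note $3$ is a quadratic nonresidue mod $7$), take $\beta = 7 \cdot [3] \in K^\times$: then $\alpha \beta = 7^2 \cdot [3]^2 \cdot (1 + 7) = 7^2 \cdot [2] \cdot (1 + 7)$ has even valuation, its root-of-unity component $[2] = [3]^2$ is a square, and the principal unit $1 + 7$ is a square by Hensel's lemma (the polynomial $x^2 - 8$ has the approximate root $x = 1$ mod $7$, where its derivative is a unit); hence $\alpha \beta$ is a square in $L^\times$, as the theorem asserts.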
Answer 2: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down using the explicit determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p> <p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \to \left( L^\times / (L^\times)^2 \right)_{\ker N},$$where $\ker N$ denotes the set of elements in $L^\times$ which have norm in $K^\times$ equal to a square.</p> <p>You can define an interesting subset of $( L^\times / (L^\times)^2 )_{\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\times$ with $\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\ker N$. Call this subset $S$.</p> <p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$\eta : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p>
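The "explicit determination" alluded to above is completely classical: assuming Serre's formula from <em>A Course in Arithmetic</em> (Ch. III), for nonzero integers $a = 2^\alpha u$ and $b = 2^\beta v$ with $u, v$ odd one has $(a, b)_2 = (-1)^{\varepsilon(u)\varepsilon(v) + \alpha \omega(v) + \beta \omega(u)}$, where $\varepsilon(u) \equiv (u-1)/2$ and $\omega(u) \equiv (u^2-1)/8 \pmod 2$. The sketch below implements exactly this formula; the function name and the test values are my own choices, not something taken from the answers.

<pre><code># Sketch of the 2-adic Hilbert symbol (a, b)_2 for nonzero integers,
# following Serre's classical formula (A Course in Arithmetic, Ch. III).
def hilbert2(a, b):
    def split(n):
        # Write n = 2**alpha * u with u odd; return (alpha, u).
        alpha = 0
        while n % 2 == 0:
            n //= 2
            alpha += 1
        return alpha, n

    alpha, u = split(a)
    beta, v = split(b)
    eps = lambda w: ((w - 1) // 2) % 2        # epsilon(w) = (w - 1)/2 mod 2
    omega = lambda w: ((w * w - 1) // 8) % 2  # omega(w) = (w^2 - 1)/8 mod 2
    e = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
    return -1 if e % 2 else 1

assert hilbert2(2, 5) == -1    # 5 is +-3 mod 8, so omega(5) = 1
assert hilbert2(2, 7) == 1     # 7 is +-1 mod 8, so omega(7) = 0
assert hilbert2(-1, -1) == -1  # the Hamilton quaternions ramify at 2
</code></pre>

Scanning representatives of the square classes of $\mathbb{Q}_2^\times$ with a routine like this is one low-tech way to hunt for the explicit counterexample the answer says should exist.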
Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul> Answer 4: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p> <p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \to \left( L^\times / (L^\times)^2 \right)_{\ker N},$$where $\ker N$ denotes the set of elements in $L^\times$ which have norm in $K^\times$ equal to a square.</p> <p>You can define an interesting subset of $( L^\times / (L^\times)^2 )_{\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\times$ with $\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\ker N$. Call this subset $S$.</p> <p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. 
So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$η : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p> Answer 5: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn:</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul> Answer 6: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. 
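<p>Here is a brute-force sanity check of the residue-field step above (a sketch only: it tests the $\mu$-component of the argument in the finite model $\mathbb{F}_{q^3} / \mathbb{F}_q$, and the choices $p = 5$ and the cubic $x^3 + x + 1$ are arbitrary ones of mine):</p> <pre><code>from itertools import product

p = 5  # an odd residue characteristic; x^3 + x + 1 has no roots mod 5, hence is irreducible

def mul(u, v):
    """Multiply u, v in F_{p^3} = F_p[x]/(x^3 + x + 1), given as coefficient triples."""
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] = (c[i + j] + u[i] * v[j]) % p
    # reduce using x^3 = -x - 1 and x^4 = -x^2 - x
    return ((c[0] - c[3]) % p, (c[1] - c[3] - c[4]) % p, (c[2] - c[4]) % p)

units = [u for u in product(range(p), repeat=3) if u != (0, 0, 0)]
squares = {mul(u, u) for u in units}

# a unit of F_p is a square in F_{p^3} iff it is a square in F_p,
# because the index (q^3 - 1)/(q - 1) = q^2 + q + 1 is odd
fp_squares = {(x * x) % p for x in range(1, p)}
assert all(((c, 0, 0) in squares) == (c in fp_squares) for c in range(1, p))

# hence every alpha in F_{p^3}^x becomes a square after scaling by some beta in F_p^x
assert all(any(mul(u, (c, 0, 0)) in squares for c in range(1, p)) for u in units)
print("checked for p =", p)
</code></pre>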
Answer 6: <p>Disclaimer: I haven't actually read, much less understood, most of the contents of the papers I've mentioned below; I know just enough to see that they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer, but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicit determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p>
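<p>For that short answer, here is a minimal sketch of the classical formula for the $2$-adic Hilbert symbol (as in Serre, <em>A Course in Arithmetic</em>, Ch. III), the tool one would use to hunt for such an explicit counterexample; the function names are mine, and no claim is made that this by itself produces the counterexample:</p> <pre><code>def eps(u):
    # epsilon(u) = (u - 1)/2 mod 2, for an odd integer u (a 2-adic unit)
    return ((u - 1) // 2) % 2

def omega(u):
    # omega(u) = (u^2 - 1)/8 mod 2, for an odd integer u
    return ((u * u - 1) // 8) % 2

def hilbert2(a, b):
    """Hilbert symbol (a, b)_2 over Q_2, for nonzero integers a and b."""
    alpha = 0
    while a % 2 == 0:
        a //= 2
        alpha += 1
    beta = 0
    while b % 2 == 0:
        b //= 2
        beta += 1
    e = (eps(a) * eps(b) + alpha * omega(b) + beta * omega(a)) % 2
    return -1 if e else 1

print(hilbert2(-1, -1))  # -1: x^2 + y^2 = -z^2 has no nontrivial solution over Q_2
print(hilbert2(2, 5))    # -1: 2 is not a norm from the unramified extension Q_2(sqrt(5))
</code></pre>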
You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$η : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p> Answer 21: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn:</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. 
Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul> Answer 22: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p> <p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \to \left( L^\times / (L^\times)^2 \right)_{\ker N},$$where $\ker N$ denotes the set of elements in $L^\times$ which have norm in $K^\times$ equal to a square.</p> <p>You can define an interesting subset of $( L^\times / (L^\times)^2 )_{\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\times$ with $\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\ker N$. Call this subset $S$.</p> <p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. 
So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$η : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p> Answer 23: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn:</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul> Answer 24: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. 
This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p> <p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \to \left( L^\times / (L^\times)^2 \right)_{\ker N},$$where $\ker N$ denotes the set of elements in $L^\times$ which have norm in $K^\times$ equal to a square.</p> <p>You can define an interesting subset of $( L^\times / (L^\times)^2 )_{\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\times$ with $\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\ker N$. Call this subset $S$.</p> <p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$η : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p> Answer 25: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. 
We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn:</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. Since the product of two nonsquares is square and units have zero valuation, we can additionally ensure the component of $\alpha \beta$ in $\mu^{(p)}(L)$ is square.</p></li> <li><p>When $p \neq 2$, then every element of $U^{(1)}$ is already a square. We can see this, for example, by Hensel's lemma. Thus, we need do nothing further.</p></li> </ul> Answer 26: <p>Disclaimer: I don't actually understand, much less have read, most of the contents of the papers I've mentioned below, just enough to understand they contain answers to the questions posed above.</p> <hr> <p>The questions posed above are interesting. I think I know the answer but I don't think I can give an elementary proof.</p> <p>The short answer is as follows. Counterexamples exist when $p = 2$, and you should be able to write one down explicitly using the explicitly determination of the Hilbert symbol when $K = \mathbb{Q}_2$.</p> <p>The long answer is as follows. Here is a more sophisticated way to think about your question. For now, let $K$ be any field of characteristic $0$ and let $f(X) = X^3 + a X + b$ be an irreducible polynomial over $K$. Let $L = K[X] / (f(X))$ denote the cubic extension described by $f(X)$.</p> <p>If $z$ is any element in $L$, I can consider the quadratic form on $L$ given by $x \mapsto \text{Tr}_{L/K} ( x^2 z / f'(X))$. This is the form you wrote down if you make the substitution $y = z / f'(X)$.</p> <p>This form plays a role in the arithmetic of the elliptic curve given by the equation $E : Y^2 = f(X)$. More precisely, there is an injective map$$E(K) / 2 E(K) \to \left( L^\times / (L^\times)^2 \right)_{\ker N},$$where $\ker N$ denotes the set of elements in $L^\times$ which have norm in $K^\times$ equal to a square.</p> <p>You can define an interesting subset of $( L^\times / (L^\times)^2 )_{\ker N}$ as the set of elements $z$ in $L$ for which the quadratic form $\text{Tr}_{L/K} ( x^2 z / f'(X))$ on $L$ represents $0$; i.e. for which there exists $x$ in $L^\times$ with $\text{Tr}_{L/K} ( x^2 z / f'(X)) = 0$. This is exactly your question, but we are now restricting to those $y$ of the form $z / f'(X)$, where $z$ lies in $\ker N$. Call this subset $S$.</p> <p>When you are studying the arithmetic of the elliptic curve $E$ this way, one of the first things you prove is that $E(K) / 2 E(K)$ lies in the subset $S$. 
You are asking whether the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$. (In fact you are asking something stronger than this, because you consider the whole of the group $L^\times / (L^\times)^2$.)</p> <p>So we want to know if the set $S$ equals the whole of $( L^\times / (L^\times)^2 )_{\ker N}$, or not. The answer, which involves some Galois cohomology (so at the level of Serre's book on local fields, say), is described in section 6.2 of the paper available here:</p> <p><a href="http://www.math.harvard.edu/~gross/preprints/stable23.pdf">http://www.math.harvard.edu/~gross/preprints/stable23.pdf</a>.</p> <p>In fact, this paper studies hyperelliptic curves and not just elliptic curves; this means one considers extensions $L / K$ of any odd degree (not just cubic extensions).</p> <p>The upshot is that the set $S$ equals $( L^\times / (L^\times)^2 )_{\ker N}$ if we are working in odd residue characteristic (and then both sets have size $1$). In even residue characteristic, and if $K / \mathbb{Q}_2$ is unramified, the set $( L^\times / (L^\times)^2 )_{\ker N}$ has size $4$ and the set $S$ has size $1$. So the answer to your question is no in this case.</p> <p>It isn't necessary to introduce elliptic curves to carry out this argument, but one does need Galois cohomology, Tate duality, etc., i.e. basically all the hard parts of local class field theory. The key point that makes the whole thing work is the sentence in the paper linked above:</p> <blockquote> <p>One can show that the resulting map$$η : \text{H}^1(k, D) \to \text{H}^1(k, \text{SO}(W)) \cong \mathbb{Z}/2\mathbb{Z}$$is an even quadratic form whose associated bilinear form is the cup product on $\text{H}^1(k, J[2])$ induced from the Weil pairing.</p> </blockquote> <p>Another reference is the following paper of Serre, which studies related (although different) questions:</p> <p>Serre, Jean-Pierre, <strong>L'invariant de Witt de la forme $\text{Tr}(x^2)$</strong>, <em>Comment. Math. Helv.</em> 59 (1984), no. 4, 651–676.</p> Answer 27: <p>When $p \neq 2$, we can prove every $K$-linear map $L \to K$ has a nonzero square in its kernel. We can, in fact, prove the stronger fact that every one-dimensional $K$-subspace of $L$ contains a square, by using</p> <blockquote> <p><strong>Theorem:</strong> If $p \neq 2$ and $\alpha \in L^\times$, then there exists $\beta \in K^\times$ such that $\alpha \beta$ is square.</p> </blockquote> <p>The abelian group $L^\times$ can be factored into three components</p> <p>$$ L^\times \cong \mathbb{Z} \times \mu^{(p)}(L) \times U_L^{(1)} $$</p> <p>where $\mathbb{Z}$ records the valuation, $\mu^{(p)}(L)$ is the group of roots of unity of order relatively prime to $p$, and $U_L^{(1)}$ is the multiplicative group of elements of $L$ that reduce to $1$ in the residue field. We can construct $\beta$ by adjusting each of the three components in turn:</p> <ul> <li><p>Since $L$ is unramified, any uniformizer of $K$ has odd valuation. (actually, since $[L:K]$ is odd, I think that would remain true even if $L$ is ramified) Thus, we can make $\alpha \beta$ have even valuation.</p></li> <li><p>Let $q$ be the order of the residue field of $K$. Since the index $[\mu^{(p)}(L) : \mu^{(p)}(K)] = \frac{q^3-1}{q-1} = q^2 + q + 1$ is odd, all nonsquares in $\mu(K)$ are also nonsquare in $\mu(L)$. 
Factorization of quartic polynomial.
2,082,195
9
https://math.stackexchange.com/questions/2082195/factorization-of-quartic-polynomial
<p>I want to know other ways of factoring this polynomial into quadratic factors:</p> <p>$$x^4+2x^3+3x^2+2x-3$$</p> <p>Thanks in advance for your suggestions.</p> <p>The original polynomial is $$x^6-2x^3-4x^2+8x-3,$$ from which the factor $(x-1)$ was divided out twice by synthetic division ($x = 1$ is a double root).</p>
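(A quick sanity check of the deflation; this is a sketch assuming SymPy is available and is not part of the original post. It confirms that dividing the sextic by $(x-1)^2$ leaves exactly the quartic above, and previews the factorization derived in the answers below.)

```python
from sympy import symbols, div, factor

x = symbols('x')
sextic = x**6 - 2*x**3 - 4*x**2 + 8*x - 3

# Synthetic division finds the root x = 1 twice; divide out (x - 1)**2.
quartic, remainder = div(sextic, (x - 1)**2, x)
assert remainder == 0
print(quartic)          # x**4 + 2*x**3 + 3*x**2 + 2*x - 3
print(factor(quartic))  # (x**2 + x - 1)*(x**2 + x + 3), up to the order of factors
```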
[ { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. 
(2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can easily be shown that this method works for general fourth-degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solutions can be made very efficient by shifting the variable. If the value of a polynomial at some value $u$ is not zero, then this value is the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained, you obtain the root candidates of that translated polynomial; adding $u$ to those values yields root candidates for the original polynomial. The root candidates must then lie in the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool, and why, at least in my opinion, it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form</p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and substituting <span class=\"math-container\">$x \\mapsto x - \\dfrac b{4a}$</span>.</p>\n\n<p>Now we try a factorization into two quadratic factors chosen so that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2,$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative (for <span class=\"math-container\">$q \\neq 0$</span>), there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the $x^2$ term works well for factorising quartics, but one has to be smart in grouping terms. In this case, write $3x^2$ as $x^2 + 2x^2$ and group terms as</p>\n\n<p>$$x^4+2x^3+3x^2+2x-3 = x^2(x^2+2x+1) + 2x^2+2x-3 = [x(x+1)]^2 + 2x(x+1) - 3$$</p>\n\n<p>$$= [x(x+1)+3][x(x+1)-1].$$</p>\n" }
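The undetermined-coefficients system in answer 2082245 can also be handed to a computer algebra system. The following is a small sketch (assuming SymPy; it is not code from the answer itself) that recovers the two integer solutions quoted there among the solver's output:

```python
from sympy import symbols, solve

a, b, c, d = symbols('a b c d')
system = [a + c - 2, d + a*c + b - 3, a*d + b*c - 2, b*d + 3]
solutions = solve(system, [a, b, c, d], dict=True)

# Among the returned solutions are {a: 1, b: -1, c: 1, d: 3} and the
# swapped pairing {a: 1, b: 3, c: 1, d: -1}; both encode the same
# factorization (x**2 + x - 1)*(x**2 + x + 3).
print(solutions)
```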
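Similarly, the Rational Root Theorem reduction in answer 2084245 can be replayed mechanically: find the integer roots $q$ of eq. (3), recover $p$ from eq. (2), and check that $x^2 - px - q$ divides the quartic. The sketch below (again assuming SymPy, and using eqs. (2) and (3) as corrected above) finds both quadratic factors:

```python
from sympy import symbols, div, divisors, Rational

x = symbols('x')
quartic = x**4 + 2*x**3 + 3*x**2 + 2*x - 3

def eq3(q):
    # Left-hand side of eq. (3); any integer root divides the constant term 27.
    return q**6 + 3*q**5 + 7*q**4 + 10*q**3 - 21*q**2 + 27*q - 27

for q in sorted(d * s for d in divisors(27) for s in (1, -1)):
    if eq3(q) == 0:
        p = Rational(-2*q*(q + 1), q*q + 3)   # eq. (2)
        quotient, rem = div(quartic, x**2 - p*x - q, x)
        print(q, p, quotient, rem)   # q = -3 and q = 1 both give p = -1, rem = 0
```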
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
But both give us the same factorization.</p>\n" }, { "answer_id": 2084245, "score": 5, "text": "<p><strong>Rational Root Theorem Solution</strong></p>\n\n<p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p>\n\n<p>$$\\begin{split}\nx^2 &amp;= p x + q\\\\ \nx^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\\\\nx^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q)\n\\end{split}$$</p>\n\n<p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p>\n\n<p>$$\\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\\right)x +p^2 q + q^2 + 2 pq + 2 q-3\\tag{1}$$</p>\n\n<p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p>\n\n<p>$$p = -\\frac{2q(q+1)}{q^2+q+3}\\tag{2}$$</p>\n\n<p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p>\n\n<p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\\tag{3}$$</p>\n\n<p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p>\n\n<p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p>\n" }, { "answer_id": 3112402, "score": 3, "text": "<p>A general quartic polynomial <span class=\"math-container\">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the \"depressed\" form </p>\n\n<p><span class=\"math-container\">$$x^4+px^2+qx+r$$</span> by dividing by <span class=\"math-container\">$a$</span> and translating the unknown by <span class=\"math-container\">$\\dfrac b{4a}$</span>.</p>\n\n<p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p>\n\n<p><span class=\"math-container\">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p>\n\n<p>By identification of the coefficients we obtain</p>\n\n<p><span class=\"math-container\">$$\\begin{cases}w+v=p+u^2,\\\\u(w-v)=q\\end{cases}$$</span> and we eliminate <span class=\"math-container\">$v,w$</span> with</p>\n\n<p><span class=\"math-container\">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p>\n\n<p><span class=\"math-container\">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class=\"math-container\">$u^2$</span>. As the cubic coefficient is positive and the value at <span class=\"math-container\">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p>\n" }, { "answer_id": 2777670, "score": 1, "text": "<p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms\nIn this , write 3x^2 as x^2 + 2x^2 and group terms as \n= x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p>\n\n<p>= [x(x+1) +3][x(x+1)-1]</p>\n" }, { "answer_id": 2082229, "score": 38, "text": "<p>Note that <span class=\"math-container\">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to\n<span class=\"math-container\">$$\n(x^2 + x + 1)^2 - 4 \\\\\n= (x^2 + x + 1)^2 - 2^2\\\\\n = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2)\n$$</span></p>\n" }, { "answer_id": 2082245, "score": 24, "text": "<p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p>\n\n<p>Another thing we can try (if we don't \"<em>see</em>\" anything) is write </p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d)\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\\equiv x^4+2x^3+3x^2+2x-3$$</p>\n\n<p>and then solve:</p>\n\n<p>\\begin{cases}\na+c=2\\\\\nd+ac+b=3\\\\\nad+bc=2\\\\\nbd=-3\n\\end{cases}</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>Once this answer is getting attention I will explain more. </p>\n\n<p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p>\n\n<p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\\{(-1,3),(1,-3),(-3,1),(3,-1)\\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\\{(1,-1,1,3),(1,3,1,-1)\\}$. 
Question: Factorization of quartic polynomial. Answer 1: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1,$$</span> which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 = (x^2 + x + 1)^2 - 2^2 = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2). $$</span></p>
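As an editorial aside, here is a minimal sketch that double-checks the observation above, assuming Python with the sympy library is available (the original answer uses no code):

```python
# Verify that the quartic equals (x^2 + x + 1)^2 - 4, then factor it.
from sympy import symbols, expand, factor

x = symbols("x")
quartic = x**4 + 2*x**3 + 3*x**2 + 2*x - 3

# The key observation: the leading coefficients match the square of x^2 + x + 1.
assert expand((x**2 + x + 1)**2 - 4 - quartic) == 0

print(factor(quartic))  # (x**2 + x - 1)*(x**2 + x + 3)
```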
Answer 2: <p>One may try to use the rational root theorem, but that is not possible in this case because there is no rational root.</p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is to write</p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3,$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3,$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Since this answer is getting attention, I will explain more.</p> <p>As already said, this is one approach when we don't have any good idea of how to proceed. How to solve the system depends on how it looks, and each system is a new system; however, the idea works for many of the cases we usually face.</p> <p>The first idea for solving the system is to look for integer solutions. On that path, the best equation to start with is $bd=-3$. That gives us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities into the system and check whether we can find $a$ and $c$ that solve it. After doing that, we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$, but both give us the same factorization.</p>
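Here is a short sketch of that integer search in plain Python (my addition; the search range for $a$ is an arbitrary assumption that comfortably covers all divisor pairs):

```python
# Enumerate the divisor pairs (b, d) with b*d = -3 and test integer a,
# deriving c from a + c = 2, against the two remaining equations.
target = (2, 3, 2, -3)  # coefficients of x^3, x^2, x, 1 in the quartic

solutions = []
for b, d in [(-1, 3), (1, -3), (-3, 1), (3, -1)]:
    for a in range(-5, 6):
        c = target[0] - a  # from a + c = 2
        if d + a*c + b == target[1] and a*d + b*c == target[2]:
            solutions.append((a, b, c, d))

print(solutions)  # [(1, -1, 1, 3), (1, 3, 1, -1)] -- the same factorization twice
```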
Answer 3: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form</p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try a factorization into two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2,$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the leading coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative whenever <span class="math-container">$q\neq0$</span> (if <span class="math-container">$q=0$</span> the depressed quartic is biquadratic and factors directly), there is certainly a positive root, which we can find by means of the Tartaglia formula.</p>
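As an added worked example (not part of the original answer), apply this to the question's quartic: here $a=1$ and $b=2$, so we translate by $\dfrac b{4a}=\dfrac12$, i.e. set $x=y-\dfrac12$. A direct expansion gives the depressed form $y^4+\dfrac32y^2-\dfrac{55}{16}$, so $p=\dfrac32$, $q=0$, $r=-\dfrac{55}{16}$. Since $q=0$, the depressed quartic is biquadratic and factors directly as $\left(y^2+\dfrac{11}{4}\right)\left(y^2-\dfrac54\right)$; substituting back $y=x+\dfrac12$ yields $(x^2+x+3)(x^2+x-1)$.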
Answer 4: <p>Splitting up the x^2 term works well for factorising a quartic, but one has to be smart in grouping terms. In this case, write 3x^2 as x^2 + 2x^2 and group terms as x^4+2x^3+3x^2+2x-3 = x^2(x^2+2x+1) + 2x^2+2x-3 = [x(x+1)]^2 + 2x(x+1) - 3</p> <p>= [x(x+1) + 3][x(x+1) - 1]</p>
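A brief added observation: with the substitution $y=x^2+x$, Answers 1 and 4 become the same computation, since $y^2+2y-3=(y+1)^2-4=(y+3)(y-1)$, which is exactly $(x^2+x+3)(x^2+x-1)$.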
So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 14: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. 
As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 15: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 16: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 17: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 18: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. 
(1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 19: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 20: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 21: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 22: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. 
</p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 23: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. 
Answer 20: <p>Splitting up the $x^2$ term works well for factorising a quartic, but one has to be smart about grouping terms. Here, write $3x^2$ as $x^2 + 2x^2$ and group:</p> <p>$$x^4+2x^3+3x^2+2x-3 = x^2(x^2+2x+1) + 2x^2+2x-3 = [x(x+1)]^2 + 2x(x+1) - 3 = [x(x+1)+3][x(x+1)-1].$$</p> Answer 21: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1,$$</span> which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2 \\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) \\ = (x^2 + x - 1)(x^2 + x + 3). $$</span></p>
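<p>A quick numeric check of this difference-of-squares step, sketched with numpy coefficient lists:</p>

<pre><code>import numpy as np

square = np.polymul([1, 1, 1], [1, 1, 1])      # (x^2 + x + 1)^2  ->  [1, 2, 3, 2, 1]
square[-1] -= 4                                 # subtract 4:          [1, 2, 3, 2, -3]
factored = np.polymul([1, 1, -1], [1, 1, 3])    # (x^2 + x - 1)(x^2 + x + 3)
assert np.array_equal(square, factored)
</code></pre>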
Answer 22: <p>One may try the rational root theorem first, but it does not apply here: this quartic has no rational root.</p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is to write</p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3,$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3,$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Since this answer is getting attention, I will explain more.</p> <p>As already said, this is an approach for when we have no better idea of how to proceed. How to solve the resulting system depends on its shape, and every system is different; still, the idea works in many of the cases we typically face.</p> <p>The first idea is to look for integer solutions, and the best equation to start with is $bd=-3$. It gives the possibilities $(b,d)\in\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Plugging each possibility into the system and checking for $a$ and $c$ that solve it, we find the solutions $(a,b,c,d)\in\{(1,-1,1,3),(1,3,1,-1)\}$. Both give the same factorization.</p>
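<p>A brute-force sketch of that integer search (the scan range for $a$ is an assumption; any window containing the solutions works):</p>

<pre><code># Integer search for (x^2 + a x + b)(x^2 + c x + d) = x^4 + 2x^3 + 3x^2 + 2x - 3,
# starting from the divisor pairs with b*d = -3 and using a + c = 2 to eliminate c.
for b, d in [(-1, 3), (1, -3), (-3, 1), (3, -1)]:
    for a in range(-5, 6):       # assumed small window for the integer a
        c = 2 - a                # from a + c = 2
        if d + a * c + b == 3 and a * d + b * c == 2:
            print(f"(x^2 {a:+}x {b:+})(x^2 {c:+}x {d:+})")
</code></pre>

<p>It prints the factorization twice, once for each ordering of the two quadratics.</p>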
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 29: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 30: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 31: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 32: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 33: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. 
So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 34: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. 
As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 35: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 36: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 37: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 38: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. 
(1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 39: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 40: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 41: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 42: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. 
</p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 43: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. 
The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 44: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 45: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 46: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 47: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 48: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. 
We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 49: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 50: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 51: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 52: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 53: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. 
So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 54: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. 
As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 55: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 56: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 57: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 58: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. 
(1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 59: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 60: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 61: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 62: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. 
</p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 63: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. 
The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 64: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 65: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 66: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 67: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 68: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. 
We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. 
This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 69: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. As the cubic coefficient is positive and the value at <span class="math-container">$u^2=0$</span> is negative, there is certainly a positive solution, which we can find by means of the Tartaglia formula.</p> Answer 70: <p>Splitting up the x^2 term works well for factorising quartic but one has to be smart in grouping terms In this , write 3x^2 as x^2 + 2x^2 and group terms as = x^2(x^2+2x+1) +2x^2+2x-3 =[x(x+1)]^2 +2x(x+1) -3</p> <p>= [x(x+1) +3][x(x+1)-1]</p> Answer 71: <p>Note that <span class="math-container">$$(x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1$$</span>which means that your polynomial is equal to <span class="math-container">$$ (x^2 + x + 1)^2 - 4 \\ = (x^2 + x + 1)^2 - 2^2\\ = (x^2 + x + 1 - 2)(x^2 + x + 1 + 2) $$</span></p> Answer 72: <p>One may try to use the rational root theorem. But that is not possible in this case because there is no rational root. </p> <p>Another thing we can try (if we don't "<em>see</em>" anything) is write </p> <p>$$(x^2+ax+b)(x^2+cx+d)\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>$$x^4+(a+c)x^3+(d+ac+b)x^2+(ad+bc)x+bd\equiv x^4+2x^3+3x^2+2x-3$$</p> <p>and then solve:</p> <p>\begin{cases} a+c=2\\ d+ac+b=3\\ ad+bc=2\\ bd=-3 \end{cases}</p> <p><strong>EDIT:</strong></p> <p>Once this answer is getting attention I will explain more. </p> <p>Like already said, that is one approach when we don't have any good idea about how to proceed. Solve the system depends how it looks and each system is a new system. However that idea works for many cases that we usually face with.</p> <p>The first idea to solve the system is look for integer solutions. On that way, the best equation to start is $bd=-3$. That give us the possibilities $(b,d)=\{(-1,3),(1,-3),(-3,1),(3,-1)\}$. Now we plug those possibilities in the system and check if we find $a$ and $c$ that solve it. After do that we see that the solutions are $(a,b,c,d)=\{(1,-1,1,3),(1,3,1,-1)\}$. But both give us the same factorization.</p> Answer 73: <p><strong>Rational Root Theorem Solution</strong></p> <p>We can reduce the problem to a straightforward Rational Root Theorem problem as follows. We start with reducing the polynomial modulo $x^2 - p x - q$ where $p$ and $q$ are undetermined integers that we want to choose such that this yields zero. This amounts to putting $x^2 = p x + q$ and higher powers are reduced by multiplying this rule by $x$ and then applying this reduction rule again. 
So, the polynomial becomes a linear function by applying the substitutions:</p> <p>$$\begin{split} x^2 &amp;= p x + q\\ x^3 &amp;= p x^2 + q x&amp;= (p^2 + q)x + pq\\ x^4 &amp;= p x^3 + q x^2 &amp;= p(p^2+2q)x+q(p^2+q) \end{split}$$</p> <p>The polynomial reduced modulo $x^2 - p x - q$ then becomes:</p> <p>$$\left(p^3 + 2 pq + 2 p^2 +2 q+3p + 2\right)x +p^2 q + q^2 + 2 pq + 2 q-3\tag{1}$$</p> <p>We need to find integers $p$ and $q$ such that the above function is identical zero. We can simplify things by using that the function is then also zero for special values for $x$, allowing us to choose a value for $x$ that simplifies things a lot. Choosing $x = -\frac{q}{p}$ eliminates the $p^3$ term and a few more terms, setting what remains equal to zero yields a linear equation for $p$ in terms of $q$. Expressing $p$ in terms of $q$ yields:</p> <p>$$p = -\frac{2q(q+1)}{q^2+q+3}\tag{2}$$</p> <p>We can then substitute this in the constant term of eq. (1), setting the resulting expression equal to zero yields the equation:</p> <p>$$q^6 + 3 q^5 + 7 q^4 + 10 q^3 - 21 q^2 + 27 q - 27 = 0\tag{3}$$</p> <p>Here we have set the numerator of the resulting rational function equal to zero. Now we know that $q$ must be an integer, therefore the Rational Root Theorem is guaranteed to succeed and inserting any solution for $q$ (for which denominator is not zero) in eq. (2) will always yield a $p$ such that $x^2 - p x - q$ is a factor of the polynomial. It can be easily shown that this method works for general fourth degree polynomials.</p> <p>Note also that applying the rational root theorem to find integer solution can be made very efficient by shifting the variable. If the value of a polynomial for some value $u$ is not zero, then this is the value of the constant term of the polynomial obtained by translating the variable by $u$. So, by factoring the value you just obtained you obtain the root candidates of that translated polynomial, adding $u$ to those values yields the root candidates of the original polynomial. The root candidates must then be on the intersection of the original list and the new list. By repeating this method, the list of root candidates will shrink rapidly. This is why I find the Rational Root Theorem such a powerful tool and that's why at least in my opinion it's worthwhile to reduce problems to Rational Root Theorem problems.</p> Answer 74: <p>A general quartic polynomial <span class="math-container">$ax^4+bx^3+cx^2+dx+e$</span> can be reduced to the "depressed" form </p> <p><span class="math-container">$$x^4+px^2+qx+r$$</span> by dividing by <span class="math-container">$a$</span> and translating the unknown by <span class="math-container">$\dfrac b{4a}$</span>.</p> <p>Now we try the factorization in two quadratic binomials such that the cubic term is missing,</p> <p><span class="math-container">$$(x^2+ux+v)(x^2-ux+w)=x^4+(-u^2+w+v)x^2+u(w-v)x+wv.$$</span></p> <p>By identification of the coefficients we obtain</p> <p><span class="math-container">$$\begin{cases}w+v=p+u^2,\\u(w-v)=q\end{cases}$$</span> and we eliminate <span class="math-container">$v,w$</span> with</p> <p><span class="math-container">$$u^2(w+v)^2-u^2(w-v)^2=4wvu^2,$$</span> i.e.</p> <p><span class="math-container">$$u^2(p+u^2)^2-q^2=4ru^2$$</span> which is cubic in <span class="math-container">$u^2$</span>. 
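<p>For readers who want a quick machine check of the factorization discussed above, here is a minimal sketch using SymPy (assuming SymPy is available; only its standard public functions are used):</p> <pre><code>from sympy import symbols, factor, expand

x = symbols('x')
quartic = x**4 + 2*x**3 + 3*x**2 + 2*x - 3

# Factor over the rationals; this reproduces the factorization
# found above: (x**2 + x - 1)*(x**2 + x + 3)
print(factor(quartic))

# Sanity check: expanding the product recovers the quartic
assert expand((x**2 + x - 1)*(x**2 + x + 3) - quartic) == 0
</code></pre> <p>The same expand-and-compare check can be used to test any candidate factor produced by the methods above.</p>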
What does $\frac{x^3}{9}\bigg|_0^1$ mean, and how should it be spoken?
2,082,952
9
https://math.stackexchange.com/questions/2082952/what-does-fracx39-bigg-01-mean-and-how-should-it-be-spoken
<p>$$\frac{x^3}{9}\Bigg|_0^1$$ The vertical line above: what does it mean, and how would I state this whole structure in spoken words, so that a screen reader would be able to read it aloud correctly?</p>
[ { "answer_id": 2083010, "score": 5, "text": "<p>It looks like people have already told you what to do mathematically. I'm a math tutor, and what I <strong>say</strong> out loud is </p>\n\n<blockquote>\n <p>\"x cubed over nine, evaluated from one to zero.\"</p>\n</blockquote>\n" }, { "answer_id": 2082954, "score": 4, "text": "<p>The vertical line means evaluate it from the top to the bottom.</p>\n\n<p>So say $x^3/9$ from $0$ to $1$.</p>\n\n<p>in general $f(x)\\biggr \\vert^b_a$ would be $f(b)-f(a)$, in this case:</p>\n\n<p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p>\n\n<p>Basically just evaluate the expression with $x=$ top limit and the bottom limit, subtract the bottom expression from the top expression.</p>\n" }, { "answer_id": 2082974, "score": 1, "text": "<p>It is the standard notation for describing the limits of an integral.</p>\n\n<p>As well mentioned in comments and other answers, $f(x)\\biggr \\vert^b_a$ is same as $f(b)-f(a)$ and here $a$ and $b$ are called limits of the integral or the integral is said to be over the interval $[a,b]$ .</p>\n\n<p>When you write something like $\\frac{x^3}{9}\\Bigg|_0^1$ you are gonna say it as </p>\n\n<blockquote>\n <p>Evaluate $\\frac{x^3}{9}$ over the interval $[0,1]$ or evaluate $\\frac{1^3}{9}-\\frac{0^3}{9}$.</p>\n</blockquote>\n" }, { "answer_id": 2083010, "score": 5, "text": "<p>It looks like people have already told you what to do mathematically. I'm a math tutor, and what I <strong>say</strong> out loud is </p>\n\n<blockquote>\n <p>\"x cubed over nine, evaluated from one to zero.\"</p>\n</blockquote>\n" }, { "answer_id": 2082954, "score": 4, "text": "<p>The vertical line means evaluate it from the top to the bottom.</p>\n\n<p>So say $x^3/9$ from $0$ to $1$.</p>\n\n<p>in general $f(x)\\biggr \\vert^b_a$ would be $f(b)-f(a)$, in this case:</p>\n\n<p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p>\n\n<p>Basically just evaluate the expression with $x=$ top limit and the bottom limit, subtract the bottom expression from the top expression.</p>\n" }, { "answer_id": 2082974, "score": 1, "text": "<p>It is the standard notation for describing the limits of an integral.</p>\n\n<p>As well mentioned in comments and other answers, $f(x)\\biggr \\vert^b_a$ is same as $f(b)-f(a)$ and here $a$ and $b$ are called limits of the integral or the integral is said to be over the interval $[a,b]$ .</p>\n\n<p>When you write something like $\\frac{x^3}{9}\\Bigg|_0^1$ you are gonna say it as </p>\n\n<blockquote>\n <p>Evaluate $\\frac{x^3}{9}$ over the interval $[0,1]$ or evaluate $\\frac{1^3}{9}-\\frac{0^3}{9}$.</p>\n</blockquote>\n" }, { "answer_id": 2083010, "score": 5, "text": "<p>It looks like people have already told you what to do mathematically. 
Question: What does $\frac{x^3}{9}\bigg|_0^1$ mean, and how should it be spoken? Answer 1: <p>It looks like people have already told you what to do mathematically. I'm a math tutor, and what I <strong>say</strong> out loud is </p> <blockquote> <p>"x cubed over nine, evaluated from zero to one."</p> </blockquote> Answer 2: <p>The vertical line means: evaluate the expression at the top limit and at the bottom limit, and subtract.</p> <p>So say $x^3/9$ from $0$ to $1$.</p> <p>In general, $f(x)\biggr \vert^b_a$ would be $f(b)-f(a)$; in this case:</p> <p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p> <p>Basically, evaluate the expression with $x$ equal to the top limit and again with the bottom limit, then subtract the bottom value from the top value.</p> Answer 3: <p>It is the standard notation for describing the limits of an integral.</p> <p>As mentioned in comments and other answers, $f(x)\biggr \vert^b_a$ is the same as $f(b)-f(a)$, and here $a$ and $b$ are called the limits of the integral, or the integral is said to be over the interval $[a,b]$.</p> <p>When you write something like $\frac{x^3}{9}\Bigg|_0^1$ you would say it as </p> <blockquote> <p>Evaluate $\frac{x^3}{9}$ over the interval $[0,1]$, or evaluate $\frac{1^3}{9}-\frac{0^3}{9}$.</p> </blockquote>
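<p>Putting the answers together, the concrete evaluation reads $$\frac{x^3}{9}\Bigg|_0^1 = \frac{1^3}{9}-\frac{0^3}{9} = \frac{1}{9},$$ so a screen reader could speak the whole structure as "x cubed over nine, evaluated from zero to one, which equals one ninth."</p>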
I'm a math tutor, and what I <strong>say</strong> out loud is </p> <blockquote> <p>"x cubed over nine, evaluated from one to zero."</p> </blockquote> Answer 8: <p>The vertical line means evaluate it from the top to the bottom.</p> <p>So say $x^3/9$ from $0$ to $1$.</p> <p>in general $f(x)\biggr \vert^b_a$ would be $f(b)-f(a)$, in this case:</p> <p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p> <p>Basically just evaluate the expression with $x=$ top limit and the bottom limit, subtract the bottom expression from the top expression.</p> Answer 9: <p>It is the standard notation for describing the limits of an integral.</p> <p>As well mentioned in comments and other answers, $f(x)\biggr \vert^b_a$ is same as $f(b)-f(a)$ and here $a$ and $b$ are called limits of the integral or the integral is said to be over the interval $[a,b]$ .</p> <p>When you write something like $\frac{x^3}{9}\Bigg|_0^1$ you are gonna say it as </p> <blockquote> <p>Evaluate $\frac{x^3}{9}$ over the interval $[0,1]$ or evaluate $\frac{1^3}{9}-\frac{0^3}{9}$.</p> </blockquote> Answer 10: <p>It looks like people have already told you what to do mathematically. I'm a math tutor, and what I <strong>say</strong> out loud is </p> <blockquote> <p>"x cubed over nine, evaluated from one to zero."</p> </blockquote> Answer 11: <p>The vertical line means evaluate it from the top to the bottom.</p> <p>So say $x^3/9$ from $0$ to $1$.</p> <p>in general $f(x)\biggr \vert^b_a$ would be $f(b)-f(a)$, in this case:</p> <p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p> <p>Basically just evaluate the expression with $x=$ top limit and the bottom limit, subtract the bottom expression from the top expression.</p> Answer 12: <p>It is the standard notation for describing the limits of an integral.</p> <p>As well mentioned in comments and other answers, $f(x)\biggr \vert^b_a$ is same as $f(b)-f(a)$ and here $a$ and $b$ are called limits of the integral or the integral is said to be over the interval $[a,b]$ .</p> <p>When you write something like $\frac{x^3}{9}\Bigg|_0^1$ you are gonna say it as </p> <blockquote> <p>Evaluate $\frac{x^3}{9}$ over the interval $[0,1]$ or evaluate $\frac{1^3}{9}-\frac{0^3}{9}$.</p> </blockquote> Answer 13: <p>It looks like people have already told you what to do mathematically. I'm a math tutor, and what I <strong>say</strong> out loud is </p> <blockquote> <p>"x cubed over nine, evaluated from one to zero."</p> </blockquote> Answer 14: <p>The vertical line means evaluate it from the top to the bottom.</p> <p>So say $x^3/9$ from $0$ to $1$.</p> <p>in general $f(x)\biggr \vert^b_a$ would be $f(b)-f(a)$, in this case:</p> <p>You would evaluate it as $(1)^3/9 - (0)^3/9$.</p> <p>Basically just evaluate the expression with $x=$ top limit and the bottom limit, subtract the bottom expression from the top expression.</p> Answer 15: <p>It is the standard notation for describing the limits of an integral.</p> <p>As well mentioned in comments and other answers, $f(x)\biggr \vert^b_a$ is same as $f(b)-f(a)$ and here $a$ and $b$ are called limits of the integral or the integral is said to be over the interval $[a,b]$ .</p> <p>When you write something like $\frac{x^3}{9}\Bigg|_0^1$ you are gonna say it as </p> <blockquote> <p>Evaluate $\frac{x^3}{9}$ over the interval $[0,1]$ or evaluate $\frac{1^3}{9}-\frac{0^3}{9}$.</p> </blockquote> Answer 16: <p>It looks like people have already told you what to do mathematically. 
I don't understand this permutation problem.
2,083,178
9
https://math.stackexchange.com/questions/2083178/i-dont-understand-this-permutation-problem
<p>I'm trying to solve this simple problem:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>I think that this is a simple permutation with repetition, so I've done this:</p> <p>$$ P = \frac{10!}{8!2!} - 9 $$</p> <p>The $9$ I subtract counts the arrangements in which the $2$ red beads are next to each other. In this way the result is $36$. However, the answer is supposed to be $$ 10! - 8!2! $$ I don't understand what is wrong, so please let me know. Thank you.</p>
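<p>(For the record, the computation in the question works out as follows; this explicit restatement of the arithmetic is mine, not the asker's: $$ P = \frac{10!}{8!2!} - 9 = \binom{10}{2} - 9 = 45 - 9 = 36. $$)</p>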
[ { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such, the best we can do is guess which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\", it must be that the only distinguishing factor between any beads is color; however, this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one).</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer.</p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ red beads. Now, indeed, if you \"merge\" the $2$ red beads into $1$ you have a total of $9$ beads, and the merged bead can be placed in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=2903040$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is:</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) into which I can insert my 2 red beads so that they are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 for my red beads, which gives the answer 36 as well ($\\binom{9}{2} = 36$).</p>\n\n<p>Hope it helps!</p>\n" } ]
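<p>As a sanity check on the counts above, here is a small brute-force sketch (my own verification, not part of any answer; the variable names are illustrative). It enumerates the possible positions of the two red beads and confirms both the indistinguishable count of $36$ and the distinguishable count of $2903040$:</p>

<pre><code># Brute-force check of the bead-arrangement counts.
from itertools import combinations
from math import comb, factorial

n = 10  # positions in the row

# Indistinguishable beads: choose 2 of the 10 positions for the red beads,
# keeping only the pairs that are not adjacent.
non_adjacent = sum(1 for i, j in combinations(range(n), 2) if j - i > 1)
print(non_adjacent)              # 36, by direct enumeration
print(comb(n, 2) - (n - 1))      # 45 - 9 = 36, the complementary count
print(comb(n - 1, 2))            # 36 again, via the 9-slots argument

# Distinguishable beads: each pattern of positions admits 8! orderings of
# the white beads and 2! orderings of the red beads.
print(non_adjacent * factorial(8) * factorial(2))  # 2903040 = 10! - 9!*2!
</code></pre>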
Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. 
So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? 
Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. 
( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. 
Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. 
Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. 
So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? 
Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. 
( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p>\n\n<p>You are right and the book is wrong (in more ways than one.)</p>\n\n<hr>\n\n<p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated \"correct\" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p>\n\n<p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p>\n" }, { "answer_id": 2083190, "score": 4, "text": "<p>Your logic and your answer are correct. There may be some typo in the given answer. </p>\n\n<p>Since the beads are indistinguishable, the possible ways to place them are $$\\dbinom{10}{2}=\\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. 
Now, indeed, if you \"merge\" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\\frac{10!}{8!2!}-9=36$$ as you have it.</p>\n" }, { "answer_id": 2083202, "score": 4, "text": "<p>If the beads are indistinguishable, then the number of arrangements is:</p>\n\n<p>$$\\frac{(8+2)!}{8!\\times2!}-\\frac{(8+1)!}{8!\\times1!}=36$$</p>\n\n<hr>\n\n<p>If the beads are distinguishable, then the number of arrangements is:</p>\n\n<p>$$(8+2)!-9!\\times2!=31933440$$</p>\n" }, { "answer_id": 2118937, "score": 1, "text": "<p>Another way you can think of it is;</p>\n\n<p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p>\n\n<p>_W_W_W_W_W_W_W_W_</p>\n\n<p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p>\n\n<p>Hope it helps!</p>\n" }, { "answer_id": 2083195, "score": 8, "text": "<p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p>\n\n<blockquote>\n <p>We have $10$\n beads, of which $8$\n are white and $2$\n are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$\n red beads are always separated?</p>\n</blockquote>\n\n<p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word \"otherwise\" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p>\n\n<blockquote>\n <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p>\n</blockquote>\n\n<p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should \"separate\" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, \"There is no limit.\"</p>\n\n<p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p>\n\n<hr>\n\n<p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p>\n\n<p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \\times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. 
Question: I don't understand this permutation problem. Answer 1: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one).</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 2: <p>Your logic and your answer are correct. There may be some typo in the given answer.</p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ red beads. 
Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 3: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 4: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 5: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. 
Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 25: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 26: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. 
Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 27: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 28: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 29: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. 
Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 30: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 31: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 32: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 33: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. 
However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 34: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 35: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 36: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 37: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. 
They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 38: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 39: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 40: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 41: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? 
Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 42: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 43: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 44: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 45: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. 
As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 46: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 47: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 48: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. 
Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 49: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 50: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. 
Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 51: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 52: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 53: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. 
Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 54: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 55: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 56: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 57: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. 
However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 58: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 59: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 60: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 61: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. 
They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 62: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 63: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 64: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 65: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? 
Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 66: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 67: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 68: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 69: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. 
As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 70: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 71: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 72: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. 
Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p> Answer 73: <p>The biggest problem here is that the question is not well defined, but sloppily written so as to rely on a great many implicit assumptions. As such the best we can do is guess as to which interpretation the authors intended, and attempt to find an interpretation such that the answer they provide matches the answer we can compute.</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red and otherwise indistinguishable. In how many ways can we place the beads so that $2$ red beads are always separated?</p> </blockquote> <p>The first imprecision: Are the $8$ white beads indistinguishable from each other? Or is it only the $2$ red beads which are indistinguishable from each other? Considering the word "otherwise" it must be that the only distinguishing factor between any beads is color; however this is not even a properly phrased English sentence. Instead we might say:</p> <blockquote> <p>We have $10$ beads, of which $8$ are white and $2$ are red. Beads of the same color are indistinguishable.</p> </blockquote> <p>Next: Note that there are NO limitations placed on how the beads are to be arranged. They could be placed in a circle, or in a pair of straight lines, or we could ship one red bead to New York and the other to Chicago (that should "separate" them sufficiently) and place the remaining $8$ beads into miscellaneous geometric patterns. So the <em>true</em> answer to this riddle <em>as stated</em> is, "There is no limit."</p> <p>I surmise that we are supposed to <em>assume</em> that the beads are to be placed consecutively in a single straight line, and that at least one white bead must be in between the two red beads for an arrangement to be valid. However, it is extremely sloppy for a <em>math problem</em> to be stated in such imprecise terms.</p> <hr> <p>However, <em>if</em> we take the preceding paragraph as the intended, valid interpretation, then the computation is very simple:</p> <p>Place the first red bead in any one of the ten positions, and the other red bead in any one of the remaining nine positions. This is $10 \times 9 = 90$ possibilities to start with. Divide by $2$ to remove double counting, which gives $45$. Subtract the $9$ illegal possibilities that include adjacent red beads, for a final answer of $36$.</p> <p>You are right and the book is wrong (in more ways than one.)</p> <hr> <p>I've considered it, and I can't think of any possible interpretation of the puzzle which would validate the stated "correct" answer from the book. Not even if we change the puzzle drastically and entirely can I come up with that figure.</p> <p>(I hope you're not trying to learn any actual new ideas from this book. It looks likely that you'll be taught them sloppily and incorrectly.)</p> Answer 74: <p>Your logic and your answer are correct. There may be some typo in the given answer. </p> <p>Since the beads are indistinguishable, the possible ways to place them are $$\dbinom{10}{2}=\frac{10!}{8!2!}$$ since you only need to specify where to put the $2$ read beads. 
Now, indeed, if you "merge" the $2$ read beads in $1$ you have a total of $9$ beads, which you can place in $$\dbinom{9}{1}=9$$ ways, which implies that the correct answer is $$\frac{10!}{8!2!}-9=36$$ as you have it.</p> Answer 75: <p>If the beads are indistinguishable, then the number of arrangements is:</p> <p>$$\frac{(8+2)!}{8!\times2!}-\frac{(8+1)!}{8!\times1!}=36$$</p> <hr> <p>If the beads are distinguishable, then the number of arrangements is:</p> <p>$$(8+2)!-9!\times2!=31933440$$</p> Answer 76: <p>Another way you can think of it is;</p> <p>Suppose you arrange the 8 white beads first. Then there is only 1 way that you can do so. Observe that there are 9 slots (denoted by the _) that I can insert my 2 red beads in such that my red beads are not adjacent to each other.</p> <p>_W_W_W_W_W_W_W_W_</p> <p>All I have to do is choose 2 positions out of the 9 to fit my red beads in, then I would have obtained that answer 36 as well. ( 9C2 = 36 )</p> <p>Hope it helps!</p>
What is the difference between a 1-form and a co-vector?
2,084,505
9
https://math.stackexchange.com/questions/2084505/what-is-the-difference-between-a-1-form-and-a-co-vector
<p>My understanding has been that co-vectors are defined on a (pseudo-)Riemannian manifold as dual vectors with respect to the given metric. However, 1-forms do not need the notion of a metric and have nothing to do with vectors except that one acts on the other.</p> <p>While reading this <a href="https://math.stackexchange.com/a/252198/404320">answer</a> on the local representation of the tautological form (which is <a href="https://en.wikipedia.org/wiki/Tautological_one-form" rel="noreferrer">defined on a general manifold</a>, irrespective of the presence of a metric structure), I could not comprehend the difference between a 1-form and a co-vector, a difference that seems important to that answer.</p> <p>Please let me know what I am missing. If possible, please also answer the linked question bearing in mind my lack of comprehension of this difference. Thank you.</p>
[ { "answer_id": 2084626, "score": 13, "text": "<p>A vector at $p$ in a manifold $M$ 'lives' in the tangent space to $p$ of $M$ denoted $T_pM$. A co-vector lives in the dual space to this tangent space $(T_pM)^*$. Usually either the tangent or cotangent space is constructed in a way so as to be a finite dimensional vector space and then the dual space is given in the usual way as linear maps to $\\Bbb{R}$.</p>\n\n<p>For smooth manifolds you can see the tangent vectors as directional derivatives of smooth functions real valued functions on the manifold, and usually the basis for the tangent space is written as $\\left\\{\\frac{\\partial}{\\partial x^i}\\big|_p\\right\\}_{i=1}^{\\mathrm{dim}(M)}$. The $x^i$ are the co-ordinates of some chosen chart that $p$ lies in. Then to agree with older notation (and until the exterior derivative is introduced) the dual basis is notated $\\left\\{dx^i\\big|_p\\right\\}_{i=1}^{\\mathrm{dim}(M)}$.</p>\n\n<p>A tangent vector is of the form $$\\sum_iv^i \\frac{\\partial}{\\partial x^i}\\Big|_p,$$</p>\n\n<p>with the $v^i$ real numbers, whilst a cotangent vector or co-vector is of the form </p>\n\n<p>$$\\sum_i\\omega_i dx^i\\Big|_p,$$</p>\n\n<p>again with the $\\omega_i$ real numbers.</p>\n\n<p>A differential form (sometimes called a co-vector field) on the other hand, rather than being just at a point is defined on an open set for example, or on possibly the whole manifold.</p>\n\n<p>$$\\omega=\\sum_i \\omega_i(p)dx^i\\Big|_p$$ </p>\n\n<p>Now the $\\omega_i$ are (smooth) functions, noticing the point dependence, and for comparison, a vector field in local co-ordinates would look like</p>\n\n<p>$$V=\\sum_i v^i(p)\\frac{\\partial}{\\partial x^i}\\Big|_p.$$</p>\n\n<p>As PVAL's comments mentioned, you can think of differential forms and vector fields as sections of the Co-Tangent and Tangent Bundles, and I prefer this because it makes it clear the point dependence, even without local co-ordinates.</p>\n\n<p>I also recommend Tu's book 'An Introduction to Manifolds' as I think it's very readable.</p>\n" }, { "answer_id": 3122978, "score": 9, "text": "<p>Evidently (see comment from Nanashi No Gombe under question) this was only a matter of mismatched naming conventions, but here is a TL;DR answer for future readers, which (in my experience) matches the way most everyone uses these terms:</p>\n\n<p>A co-vector is to a <span class=\"math-container\">$1$</span>-form as a vector is to a vector field.</p>\n\n<p>That is, if you have a <span class=\"math-container\">$1$</span>-form <span class=\"math-container\">$\\alpha$</span> on some manifold <span class=\"math-container\">$M$</span> and you pick any <span class=\"math-container\">$x \\in M$</span>, the value of <span class=\"math-container\">$\\alpha$</span> at <span class=\"math-container\">$x$</span> is a co-vector.</p>\n\n<p>If you think of a co-vector <span class=\"math-container\">$\\alpha|_x$</span> as a map <span class=\"math-container\">$T_x M \\rightarrow \\mathbb{R}$</span>, then you can extend this to thinking of a <span class=\"math-container\">$1$</span>-form as a map <span class=\"math-container\">$\\alpha: \\Gamma (TM) \\rightarrow \\mathcal{C}^{\\infty}(X)$</span>.</p>\n" }, { "answer_id": 2084626, "score": 13, "text": "<p>A vector at $p$ in a manifold $M$ 'lives' in the tangent space to $p$ of $M$ denoted $T_pM$. A co-vector lives in the dual space to this tangent space $(T_pM)^*$. 
Question: What is the difference between a 1-form and a co-vector? Answer 1: <p>A vector at $p$ in a manifold $M$ 'lives' in the tangent space of $M$ at $p$, denoted $T_pM$. A co-vector lives in the dual space to this tangent space, $(T_pM)^*$. Usually either the tangent or cotangent space is constructed so as to be a finite-dimensional vector space, and then the dual space is given in the usual way as linear maps to $\Bbb{R}$.</p> <p>For smooth manifolds you can see the tangent vectors as directional derivatives of smooth real-valued functions on the manifold, and usually the basis for the tangent space is written as $\left\{\frac{\partial}{\partial x^i}\big|_p\right\}_{i=1}^{\mathrm{dim}(M)}$. The $x^i$ are the co-ordinates of some chosen chart that $p$ lies in. Then, to agree with older notation (and until the exterior derivative is introduced), the dual basis is notated $\left\{dx^i\big|_p\right\}_{i=1}^{\mathrm{dim}(M)}$.</p> <p>A tangent vector is of the form $$\sum_iv^i \frac{\partial}{\partial x^i}\Big|_p,$$</p> <p>with the $v^i$ real numbers, whilst a cotangent vector or co-vector is of the form</p> <p>$$\sum_i\omega_i dx^i\Big|_p,$$</p> <p>again with the $\omega_i$ real numbers.</p> <p>A differential form (sometimes called a co-vector field), on the other hand, rather than living at just one point, is defined on an open set, or possibly on the whole manifold:</p> <p>$$\omega=\sum_i \omega_i(p)dx^i\Big|_p.$$</p> <p>Now the $\omega_i$ are (smooth) functions; note the point dependence. For comparison, a vector field in local co-ordinates would look like</p> <p>$$V=\sum_i v^i(p)\frac{\partial}{\partial x^i}\Big|_p.$$</p> <p>As PVAL's comments mentioned, you can think of differential forms and vector fields as sections of the cotangent and tangent bundles, and I prefer this because it makes the point dependence clear, even without local co-ordinates.</p> <p>I also recommend Tu's book 'An Introduction to Manifolds' as I think it's very readable.</p> Answer 2: <p>Evidently (see comment from Nanashi No Gombe under question) this was only a matter of mismatched naming conventions, but here is a TL;DR answer for future readers, which (in my experience) matches the way most everyone uses these terms:</p> <p>A co-vector is to a <span class="math-container">$1$</span>-form as a vector is to a vector field.</p> <p>That is, if you have a <span class="math-container">$1$</span>-form <span class="math-container">$\alpha$</span> on some manifold <span class="math-container">$M$</span> and you pick any <span class="math-container">$x \in M$</span>, the value of <span class="math-container">$\alpha$</span> at <span class="math-container">$x$</span> is a co-vector.</p> <p>If you think of a co-vector <span class="math-container">$\alpha|_x$</span> as a map <span class="math-container">$T_x M \rightarrow \mathbb{R}$</span>, then you can extend this to thinking of a <span class="math-container">$1$</span>-form as a map <span class="math-container">$\alpha: \Gamma (TM) \rightarrow \mathcal{C}^{\infty}(M)$</span>.</p>
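To make the vector-field analogy in Answer 2 concrete, here is a minimal plain-Python sketch (not taken from either answer; the helper names `covector` and `one_form` are invented for illustration). A co-vector is a single linear functional on one tangent space; a 1-form assigns such a functional to every point, with components that vary with the point.

```python
import numpy as np

# A co-vector at a point p: a single linear functional on R^n,
# represented by its component row (omega_1, ..., omega_n).
def covector(components):
    components = np.asarray(components, dtype=float)
    return lambda v: float(components @ np.asarray(v, dtype=float))

# A 1-form (co-vector field): a map p -> co-vector at p.
# Here each component omega_i(p) is a function of the point p.
def one_form(component_funcs):
    def at_point(p):
        return covector([f(p) for f in component_funcs])
    return at_point

# Example on R^2 with the identity chart: omega = -y dx + x dy,
# i.e. component functions (-y, x).
omega = one_form([lambda p: -p[1], lambda p: p[0]])

p = (1.0, 2.0)      # a point of the manifold
v = (3.0, 4.0)      # a tangent vector at p, in the basis d/dx, d/dy
alpha = omega(p)    # the co-vector omega|_p
print(alpha(v))     # omega|_p(v) = (-2)*3 + 1*4 = -2.0
```

The design mirrors the section-of-a-bundle picture from Answer 1: `one_form` is literally a map from points to elements of the fibre over each point.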
Quadratic equation system $A^2 + B^2 = 5$ and $AB = 2$
2,084,539
9
https://math.stackexchange.com/questions/2084539/quadratic-equation-system-a2-b2-5-and-ab-2
<p>Given a system of equations</p> <blockquote> <p><span class="math-container">$A^2 + B^2 = 5$</span></p> <p><span class="math-container">$AB = 2$</span></p> </blockquote> <p>what is the correct way to solve it?</p> <p>I see immediately that the answers are</p> <ol> <li><span class="math-container">$A=1, B=2$</span></li> <li><span class="math-container">$A=2, B=1$</span></li> <li><span class="math-container">$A=-1, B=-2$</span></li> <li><span class="math-container">$A=-2, B=-1$</span></li> </ol> <p>but I don't understand the correct way of getting there. I have tried to isolate one of the variables and put the resulting expression into one of the equations, but this didn't get me anywhere. What is the correct way of solving this problem?</p>
[ { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$; then we have\n\\begin{align}\nx+y=5\\\\\nxy=(AB)^2=4\n\\end{align}\nand thus by Vieta's formulas $x$ and $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so, as @Hurkyl pointed out in the comments, your solutions are all of them.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=4\\text{ and }1,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and the corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore (taking $a\\geq b\\geq 0$)\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" } ]
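A quick machine check of the thread's conclusion (an editorial sketch, not part of the original answers; it assumes Python with SymPy installed): solve the system symbolically and confirm that the four pairs listed in the question form the complete solution set.

```python
# Verification sketch -- assumes SymPy is available; not from the original thread.
from sympy import symbols, solve

A, B = symbols('A B')

# The system from the question: A^2 + B^2 = 5 and A*B = 2,
# written as expressions equal to zero.
solutions = solve([A**2 + B**2 - 5, A*B - 2], [A, B])

# The four pairs claimed in the question, compared as a set
# because solve() does not guarantee an ordering.
expected = {(1, 2), (2, 1), (-1, -2), (-2, -1)}
assert set(solutions) == expected
print(sorted(solutions))  # [(-2, -1), (-1, -2), (1, 2), (2, 1)]
```

Since the substitution $B=2/A$ turns the system into the degree-4 equation $A^4-5A^2+4=0$, four solutions is also the most one could expect, which matches what the check reports.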
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" }, { "answer_id": 2084545, "score": 15, "text": "<p>Obviously $A\\neq 0$, then $B=\\frac{2}{A}$. Subtitute into the first equation:</p>\n\n<p>$A^2+\\frac{4}{A^2}=5 \\implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p>\n\n<p>Here we can deduce 4 values of $A$ like your solutions, find $B=\\frac{2}{A}$ and it's done</p>\n" }, { "answer_id": 2084547, "score": 14, "text": "<p>Hint: <span class=\"math-container\">$$A^2+B^2+2AB=(A+B)^2=5+2\\cdot2=9=3^2$$</span>\nThen <span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p>Thus</p>\n<blockquote>\n<p><span class=\"math-container\">$A+B=\\pm3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n</blockquote>\n<p>Case 1)</p>\n<p><span class=\"math-container\">$A+B=3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n<p>Case 2)</p>\n<p><span class=\"math-container\">$A+B=-3$</span></p>\n<p><span class=\"math-container\">$AB=2$</span></p>\n" }, { "answer_id": 2084548, "score": 8, "text": "<p>You have $$A^2+B^2=5 \\tag 1$$ $$AB=2 \\tag 2$$ So, from $(2)$, $B=\\frac 2 A$. 
Plug in $(1)$ to get $$A^2+\\frac 4{A^2}=5 \\tag 3$$ So, $$A^4-5A^2+4=0 \\tag 4$$ which is a quadratic in $A^2$.</p>\n" }, { "answer_id": 2085028, "score": 8, "text": "<p>$A^2 + B^2=5$ and $ AB=2$</p>\n\n<p>Subtracting the first equation with twice of the second equation leads to $A-B= \\pm 1$.</p>\n\n<p>Adding the first equation with twice the second equation gives $A+B= \\pm 3$.</p>\n\n<p>Solve the four possible sets of equations to get the answers above.</p>\n" }, { "answer_id": 2084554, "score": 4, "text": "<p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation:\n$$x^2-Sx+P=0.$$</p>\n\n<p>Here, you have $P=2$ and\n$$(A+B)^2=A^2+B^2+2AB=9, \\enspace\\text{whence}\\quad S=\\pm3.$$\nThus $A$ and $B$ are the roots of one of the equations:\n$$x^2-3x+2=0,\\qquad x^2+3x+2=0.$$\nIn the present case, there remains to apply the <em>Rational roots theorem</em>.</p>\n" }, { "answer_id": 2085190, "score": 4, "text": "<p>From $AB=2$ we know $A$ and $B$ have the same sign.\nMoreover, if we find a solution for some values $A=A_1$ and $B=B_1$,\nthen $A=-A_1$ and $B=-B_1$ also is a solution.\nSo we can assume initially that $A$ and $B$ are both positive,\nfind all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p>\n\n<p>Similarly, we can assume initially that $A\\geq B$, because all the solutions\nin which $B \\geq A$ can then be produced just by swapping the values of\nthe two variables.</p>\n\n<p>So we now have (with these assumptions) $A \\geq B &gt; 0,$ \n$A^2 + B^2 = 5,$ and $AB = 2.$ Then\n\\begin{align}\n(A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\\\\n(A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\\\\n\\end{align}\nIn each of these equations, the positive square roots of both sides \nmust be equal, and since $A+B&gt;0$ and $A-B\\geq0$ this tells us that\n\\begin{align}\nA + B &amp;= 3, \\\\\nA - B &amp;= 1. 
\\\\\n\\end{align}\nTherefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$\nThat is, the unique solution (under the assumption $A\\geq B&gt;0$)\nis $(A,B) = (2,1).$</p>\n\n<p>We can now remove the assumptions that $A\\geq B$ and that $B&gt;0,$\nthen use the facts mentioned earlier to derive the remaining solutions:\n$(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$\nor $(A,B) = (-1,-2)$ (changing signs).</p>\n" }, { "answer_id": 2085198, "score": 3, "text": "<p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have\n\\begin{align}\nx+y=5\\\\\nxy=4\n\\end{align}\nand thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial\n$$z^2-5z+4=(z-4)(z-1).$$\nThis gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p>\n" }, { "answer_id": 2085784, "score": 3, "text": "<p><strong>My favorite way</strong>:</p>\n\n<p>Multiply the first equation by $A^2$ ($\\ne0$):</p>\n\n<p>$$A^4+A^2B^2=5A^2.$$</p>\n\n<p>As $A^2B^2=4$, you get a biquadratic equation:</p>\n\n<p>$$A^4-5A^2+4=0.^*$$</p>\n\n<p>Then</p>\n\n<p>$$A^2=\\frac{5\\pm3}{2}=1\\text{ and }4,$$ giving the four solutions</p>\n\n<p>$$A=-2,-1,1,2$$</p>\n\n<p>and corresponding $B=2/A$:</p>\n\n<p>$$B=-1,-2,2,1.$$</p>\n\n<hr>\n\n<p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p>\n" }, { "answer_id": 2085778, "score": 0, "text": "<p>If\n$a^2+b^2 = c$\nand\n$ab = d$\nthen\n$(a+b)^2\n=a^2+2ab+b^2\n= c+2d\n$\nand\n$(a-b)^2\n=a^2-2ab+b^2\n= c-2d\n$.</p>\n\n<p>Therefore\n$a+b = \\sqrt{c+2d}$\nand\n$a-b = \\sqrt{c-2d}$\nso\n$a = \\frac12(\\sqrt{c+2d}+\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\sqrt{c+2d}-\\sqrt{c-2d})\n$.</p>\n\n<p>If $c=5, d=2$,\nthen\n$\\sqrt{c+2d} = 3$\nand\n$\\sqrt{c-2d} = 1$\nso\n$a= 2$\nand\n$b = 1$. 
</p>\n\n<p>If you want to get\n<em>all</em> the solutions,\nyou can write\n$a = \\frac12(\\pm\\sqrt{c+2d}\\pm\\sqrt{c-2d})\n$\nand\n$b = \\frac12(\\pm\\sqrt{c+2d}\\mp\\sqrt{c-2d})\n$\nfor all possible signs.</p>\n" }, { "answer_id": 2111242, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\nA &amp; = \\root{5}\\cos\\pars{\\theta}\\,,\\quad\nB = \\root{5}\\sin\\pars{\\theta} \\implies\n\\bracks{\\root{5}\\cos\\pars{\\theta}}\\bracks{\\root{5}\\sin\\pars{\\theta}} = 2\n\\\\[5mm]\n\\implies &amp; \\sin\\pars{2\\theta} = {4 \\over 5}\n\\end{align}\nMoreover,\n$$\n\\left\\{\\begin{array}{rcl}\n\\ds{A} &amp; \\ds{=} &amp; \\ds{\\root{5}\\root{1 + \\cos\\pars{2\\theta} \\over 2} =\n\\root{{5 \\over 2}\\bracks{1 + \\root{1 - \\sin^{2}\\pars{2\\theta}}}}} = \\bbx{\\ds{2}}\n\\\\[2mm]\n\\ds{B} &amp; \\ds{=} &amp; \\ds{2 \\over A} = \\bbx{\\ds{1}}\n\\end{array}\\right.\n$$</p>\n\n<blockquote>\n <p>Of course $\\ds{\\pars{A,B} = \\pars{-2,-1}}$ is a solution too !!!.\n And $\\ds{\\pars{A,B} = \\pars{-1,-2}}$. </p>\n</blockquote>\n" } ]
Question: Quadratic equation system $A^2 + B^2 = 5$ and $AB = 2$ Answer 1: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. Substitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce the 4 values of $A$, matching your solutions; find $B=\frac{2}{A}$ and it's done.</p> Answer 2: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 3: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 4: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting twice the second equation from the first gives $(A-B)^2=1$, so $A-B= \pm 1$.</p> <p>Adding twice the second equation to the first gives $(A+B)^2=9$, so $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 5: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, it remains to apply the <em>rational root theorem</em>.</p> Answer 6: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ is also a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 7: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so, as @Hurkyl pointed out in the comments, your solutions are all of them.</p> Answer 8: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 9: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore, taking the positive square roots, $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 10: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote>
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 12: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 13: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 14: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 15: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 16: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 17: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 18: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 19: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 20: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 21: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 22: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 23: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 24: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 25: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 26: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 27: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 28: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 29: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 30: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 31: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 32: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 33: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 34: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 35: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 36: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 37: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 38: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 39: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 40: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 41: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 42: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 43: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 44: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 45: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 46: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 47: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 48: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 49: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 50: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 51: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 52: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 53: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 54: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 55: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 56: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 57: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 58: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 59: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 60: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 61: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 62: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 63: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 64: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 65: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 66: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 67: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 68: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 69: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 70: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 71: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 72: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 73: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 74: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 75: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 76: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 77: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 78: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 79: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 80: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 81: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 82: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 83: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 84: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 85: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 86: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p> Answer 87: <p>More or less the same as other answers, but not quite: Put $x=A^2$, $y=B^2$, then we have \begin{align} x+y=5\\ xy=4 \end{align} and thus by Viete's formulas we have that $x$, $y$ are the two roots of the quadratic polynomial $$z^2-5z+4=(z-4)(z-1).$$ This gives you four possibilities for $(A,B)$, so as @Hurkyl pointed out in comments, your solutions are all.</p> Answer 88: <p><strong>My favorite way</strong>:</p> <p>Multiply the first equation by $A^2$ ($\ne0$):</p> <p>$$A^4+A^2B^2=5A^2.$$</p> <p>As $A^2B^2=4$, you get a biquadratic equation:</p> <p>$$A^4-5A^2+4=0.^*$$</p> <p>Then</p> <p>$$A^2=\frac{5\pm3}{2}=1\text{ and }4,$$ giving the four solutions</p> <p>$$A=-2,-1,1,2$$</p> <p>and corresponding $B=2/A$:</p> <p>$$B=-1,-2,2,1.$$</p> <hr> <p>$^*$Following @Hurkyl's argument, you can stop there, because a biquadratic equation cannot have more than four roots, and by $AB=2$, there is a single $B$ for a given $A$.</p> Answer 89: <p>If $a^2+b^2 = c$ and $ab = d$ then $(a+b)^2 =a^2+2ab+b^2 = c+2d $ and $(a-b)^2 =a^2-2ab+b^2 = c-2d $.</p> <p>Therefore $a+b = \sqrt{c+2d}$ and $a-b = \sqrt{c-2d}$ so $a = \frac12(\sqrt{c+2d}+\sqrt{c-2d}) $ and $b = \frac12(\sqrt{c+2d}-\sqrt{c-2d}) $.</p> <p>If $c=5, d=2$, then $\sqrt{c+2d} = 3$ and $\sqrt{c-2d} = 1$ so $a= 2$ and $b = 1$. </p> <p>If you want to get <em>all</em> the solutions, you can write $a = \frac12(\pm\sqrt{c+2d}\pm\sqrt{c-2d}) $ and $b = \frac12(\pm\sqrt{c+2d}\mp\sqrt{c-2d}) $ for all possible signs.</p> Answer 90: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} A &amp; = \root{5}\cos\pars{\theta}\,,\quad B = \root{5}\sin\pars{\theta} \implies \bracks{\root{5}\cos\pars{\theta}}\bracks{\root{5}\sin\pars{\theta}} = 2 \\[5mm] \implies &amp; \sin\pars{2\theta} = {4 \over 5} \end{align} Moreover, $$ \left\{\begin{array}{rcl} \ds{A} &amp; \ds{=} &amp; \ds{\root{5}\root{1 + \cos\pars{2\theta} \over 2} = \root{{5 \over 2}\bracks{1 + \root{1 - \sin^{2}\pars{2\theta}}}}} = \bbx{\ds{2}} \\[2mm] \ds{B} &amp; \ds{=} &amp; \ds{2 \over A} = \bbx{\ds{1}} \end{array}\right. $$</p> <blockquote> <p>Of course $\ds{\pars{A,B} = \pars{-2,-1}}$ is a solution too !!!. And $\ds{\pars{A,B} = \pars{-1,-2}}$. </p> </blockquote> Answer 91: <p>Obviously $A\neq 0$, then $B=\frac{2}{A}$. 
Subtitute into the first equation:</p> <p>$A^2+\frac{4}{A^2}=5 \implies 0=A^4-5A^2+4=(A^2-1)(A^2-4)$</p> <p>Here we can deduce 4 values of $A$ like your solutions, find $B=\frac{2}{A}$ and it's done</p> Answer 92: <p>Hint: <span class="math-container">$$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$</span> Then <span class="math-container">$A+B=\pm3$</span></p> <p>Thus</p> <blockquote> <p><span class="math-container">$A+B=\pm3$</span></p> <p><span class="math-container">$AB=2$</span></p> </blockquote> <p>Case 1)</p> <p><span class="math-container">$A+B=3$</span></p> <p><span class="math-container">$AB=2$</span></p> <p>Case 2)</p> <p><span class="math-container">$A+B=-3$</span></p> <p><span class="math-container">$AB=2$</span></p> Answer 93: <p>You have $$A^2+B^2=5 \tag 1$$ $$AB=2 \tag 2$$ So, from $(2)$, $B=\frac 2 A$. Plug in $(1)$ to get $$A^2+\frac 4{A^2}=5 \tag 3$$ So, $$A^4-5A^2+4=0 \tag 4$$ which is a quadratic in $A^2$.</p> Answer 94: <p>$A^2 + B^2=5$ and $ AB=2$</p> <p>Subtracting the first equation with twice of the second equation leads to $A-B= \pm 1$.</p> <p>Adding the first equation with twice the second equation gives $A+B= \pm 3$.</p> <p>Solve the four possible sets of equations to get the answers above.</p> Answer 95: <p>This comes down to the classical problem of finding two numbers, given their sum $S$ and their product $P$: they're the roots, if any, of the quadratic equation: $$x^2-Sx+P=0.$$</p> <p>Here, you have $P=2$ and $$(A+B)^2=A^2+B^2+2AB=9, \enspace\text{whence}\quad S=\pm3.$$ Thus $A$ and $B$ are the roots of one of the equations: $$x^2-3x+2=0,\qquad x^2+3x+2=0.$$ In the present case, there remains to apply the <em>Rational roots theorem</em>.</p> Answer 96: <p>From $AB=2$ we know $A$ and $B$ have the same sign. Moreover, if we find a solution for some values $A=A_1$ and $B=B_1$, then $A=-A_1$ and $B=-B_1$ also is a solution. So we can assume initially that $A$ and $B$ are both positive, find all such solutions, and then simply change signs to produce all the remaining solutions (the ones in which $A$ and $B$ are both negative).</p> <p>Similarly, we can assume initially that $A\geq B$, because all the solutions in which $B \geq A$ can then be produced just by swapping the values of the two variables.</p> <p>So we now have (with these assumptions) $A \geq B &gt; 0,$ $A^2 + B^2 = 5,$ and $AB = 2.$ Then \begin{align} (A + B)^2 &amp;= A^2 + 2AB + B^2 = 5 + 2(2) = 9, \\ (A - B)^2 &amp;= A^2 - 2AB + B^2 = 5 - 2(2) = 1. \\ \end{align} In each of these equations, the positive square roots of both sides must be equal, and since $A+B&gt;0$ and $A-B\geq0$ this tells us that \begin{align} A + B &amp;= 3, \\ A - B &amp;= 1. 
\\ \end{align} Therefore (adding the two equations) $2A = 4,$ so $A = 2,$ so $B = 1.$ That is, the unique solution (under the assumption $A\geq B&gt;0$) is $(A,B) = (2,1).$</p> <p>We can now remove the assumptions that $A\geq B$ and that $B&gt;0,$ then use the facts mentioned earlier to derive the remaining solutions: $(A,B) = (1,2)$ (swapping values), $(A,B) = (-2,-1)$ or $(A,B) = (-1,-2)$ (changing signs).</p>
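All of the approaches above reduce the system to the same quadratic $z^2-5z+4=0$ and therefore to the same four pairs. A quick sanity check can be done symbolically; the following is a minimal sketch, assuming SymPy is available (it is not drawn from any of the answers):

```python
from sympy import symbols, solve

A, B = symbols("A B")

# Solve the system A^2 + B^2 = 5 and A*B = 2 exactly.
solutions = solve([A**2 + B**2 - 5, A * B - 2], [A, B])

# The four pairs (2, 1), (1, 2), (-2, -1), (-1, -2), in some order.
print(solutions)
assert set(solutions) == {(2, 1), (1, 2), (-2, -1), (-1, -2)}
```

Note that $AB=2$ pins down $B$ once $A$ is chosen, which is why there are exactly four pairs rather than one for every combination of signs.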
Is a matrix diagonalizable if one of its eigenvalues is zero?
2,085,085
9
https://math.stackexchange.com/questions/2085085/is-a-matrix-diagonalizable-if-one-of-its-eigenvalues-is-zero
<p>I checked whether the following matrix is diagonalizable.</p> <p>$$A=\begin{bmatrix} 4 &amp; 0 &amp; 4\\ 0 &amp; 4 &amp; 4\\ 4 &amp; 4 &amp; 8 \end{bmatrix}$$</p> <p>And the corresponding eigenvalues were $0$, $4$, $12$.</p> <p>Now if we write the diagonal matrix similar to $A$, it would be</p> <p>$$D=\begin{bmatrix} 0 &amp; 0 &amp; 0\\ 0 &amp; 4 &amp; 0\\ 0 &amp; 0 &amp; 12 \end{bmatrix}$$</p> <p>which is theoretically not a diagonal matrix.</p> <p>Now is $A$ diagonalizable?</p>
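The stated spectrum is easy to confirm: the characteristic polynomial is $\det(A-\lambda I)=-\lambda(\lambda-4)(\lambda-12)$, giving $\lambda=0,4,12$. A minimal numerical sketch, assuming NumPy is available, checks the eigenvalues and reconstructs $A$ from its eigendecomposition:

```python
import numpy as np

A = np.array([[4, 0, 4],
              [0, 4, 4],
              [4, 4, 8]], dtype=float)

# A is real symmetric, so eigh returns real eigenvalues (ascending)
# and an orthogonal matrix P of eigenvectors with A = P diag(w) P^T.
w, P = np.linalg.eigh(A)
print(w)  # approximately [0, 4, 12], up to floating-point error

# Reconstructing A from P diag(w) P^T confirms the diagonalization:
# P is invertible (here even orthogonal), so A is diagonalizable.
assert np.allclose(P @ np.diag(w) @ P.T, A)
```

Since the three eigenvalues are distinct, diagonalizability also follows with no computation at all, as the answers below point out; a zero eigenvalue only forces $A$ to be singular, not non-diagonalizable.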
[ { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. 
A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. 
A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. 
A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. 
A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" }, { "answer_id": 2085089, "score": 14, "text": "<p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p>\n\n<p>Hence your matrix is diagonalizable.</p>\n\n<p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p>\n" }, { "answer_id": 2085180, "score": 6, "text": "<p>Every Matrix is diagonalisable if it's eigenvalues are all distinct, no matter the values of the eigenvalue theirselves. Also your second Matrix is diagonal, cause you only have terms on the diagonal, respectively 0, 4 and 12.</p>\n" }, { "answer_id": 2086917, "score": 1, "text": "<p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that given a real inner product space and a self adjoint operator then it has a orthonormal basis of eigenvectors? </p>\n" } ]
Question: Is a matrix diagonalizable, if one of its eigenvalues is zero? Answer 1: <p>$D$ is a diagonal matrix. A square matrix is a diagonal matrix if and only if the off-diagonal entries are $0$.</p> <p>Hence your matrix is diagonalizable.</p> <p>In fact, if the eigenvalues are all distinct, then it is diagonalizable.</p> Answer 2: <p>Every matrix is diagonalisable if its eigenvalues are all distinct, no matter the values of the eigenvalues themselves. Also, your second matrix is diagonal, because you only have terms on the diagonal, respectively 0, 4 and 12.</p> Answer 3: <p>Would the fact that every real symmetric matrix is diagonalizable come immediately from the fact that, given a real inner product space and a self-adjoint operator, it has an orthonormal basis of eigenvectors? </p>
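A minimal worked check of the claim in the answers above (a sketch, assuming — as the answers suggest — that the matrix in question is the diagonal matrix with entries $0$, $4$ and $12$): a zero eigenvalue is no obstruction to diagonalizability, since
$$ D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 12 \end{pmatrix} = I\,D\,I^{-1}, $$
so $D$ is (trivially) diagonalizable; more generally, the three distinct eigenvalues $0, 4, 12$ give three linearly independent eigenvectors, which is exactly the criterion for diagonalizability.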
What does the denominator in the second derivative mean?
2,085,208
9
https://math.stackexchange.com/questions/2085208/what-does-the-denominator-in-the-second-derivative-mean
<p>This occurred to me a few days ago.</p> <p>We know that the derivative of a function $y=f(x)$ is $\frac{dy}{dx}$. This is because it represents how $y$ changes with $x$, which is the rate of change of $y$, or more specifically, the gradient of a function.</p> <p>Then the second derivative is the rate of change of the rate of change, or the rate of change of the gradient. Since a general rate of change is $\frac{d}{dx}$, the second derivative is $(\frac{d}{dx})(\frac{dy}{dx})$. Thus, the expanded form is $\frac{d^2y}{dx^2}$.</p> <p>My question is, is the denominator $d(x)^2$ or is it $(dx)^2$? Surely, it would be the latter, because when you expand $(\frac{d}{dx})(\frac{dy}{dx})$, the $(dx)(dx)$ would become $(dx)^2$. But then why is it never written with brackets? I'm sure that would confuse some people, and I only realised it myself when I started thinking about the second derivative properly, in terms of what it actually means.</p>
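To make the expansion in the question explicit (a notational sketch only, not a claim that differentials may always be manipulated algebraically):
$$ \frac{d^2y}{dx^2} := \frac{d}{dx}\left(\frac{dy}{dx}\right), $$
where formally multiplying the two denominators gives $dx \cdot dx = (dx)^2$; the unbracketed $dx^2$ is therefore read as $(dx)^2$, not as $d(x^2)$, the differential of $x^2$.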
[ { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. 
Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. 
</p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. 
Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). 
However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. 
The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. 
Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). 
However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. 
The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. 
Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). 
However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. 
The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. 
Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" }, { "answer_id": 2085227, "score": 5, "text": "<p>You are completely correct. When we write $\\frac{d^2y}{dx^2}$, we really mean to write $\\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one), rather it is a notation that represents the concept of finding the rate of change of the rate of change.</p>\n" }, { "answer_id": 2085225, "score": 4, "text": "<p>You're right that it should really be $(dx)^2$ (if you were asking \"why should that be?\", see my answer <a href=\"https://math.stackexchange.com/a/2029239/26369\">here</a>).</p>\n\n<p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p>\n" }, { "answer_id": 3635860, "score": 1, "text": "<p>You are correct that <span class=\"math-container\">$dx^2$</span> actually means <span class=\"math-container\">$(dx)^2$</span>. Or, even better, <span class=\"math-container\">$(\\text{d}x)^2$</span> (note the non-italicized <span class=\"math-container\">$d$</span> because it is an operator not a variable). 
However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class=\"math-container\">$\\frac{d^2y}{dx^2}$</span>. If you actually perform the derivative of <span class=\"math-container\">$\\frac{dy}{dx}$</span> you will notice that <span class=\"math-container\">$\\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule for solving. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class=\"math-container\">$\\frac{d^2y}{dx^2} - \\frac{dy}{dx}\\frac{d^2x}{dx^2}$</span>. Here, <span class=\"math-container\">$d^2y$</span> is a shorthand for <span class=\"math-container\">$\\textrm{d}(\\textrm{d}(y))$</span> and <span class=\"math-container\">$dx^2$</span> is a shorthand for <span class=\"math-container\">$(\\textrm{d}x)^2$</span>. </p>\n\n<p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper \"Extending the Algebraic Manipulability of Differentials\" explores this idea further.</p>\n" } ]
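A quick derivation shows where the $(dx)^2$ in the denominator comes from. The computation below is an illustrative sketch added for clarity (it does not appear in the answers), and it assumes the classical convention that $x$ is the independent variable, so the increment $dx$ is held constant and $d(dx)=0$:

$$dy = y'(x)\,dx \;\Longrightarrow\; d^2y = d\bigl(y'(x)\,dx\bigr) = \bigl(y''(x)\,dx\bigr)\,dx = y''(x)\,(dx)^2, \qquad\text{so}\qquad y''(x) = \frac{d^2y}{(dx)^2}.$$

Dropping the assumption $d(dx)=0$ gives $d^2y = y''\,(dx)^2 + y'\,d^2x$ instead, which rearranges to exactly the formula $\frac{d^2y}{dx^2}-\frac{dy}{dx}\frac{d^2x}{dx^2}$ quoted in the last answer.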
Question: What does the denominator in the second derivative mean? Answer 1: <p>You are completely correct. When we write $\frac{d^2y}{dx^2}$, we really mean to write $\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\frac{d^2y}{dx^2}$ without question, because it is not exactly an algebraic expression (although sometimes it can be treated as one); rather, it is a notation that represents the concept of finding the rate of change of the rate of change.</p> Answer 2: <p>You're right that it should really be $(dx)^2$ (if you were asking "why should that be?", see my answer <a href="https://math.stackexchange.com/a/2029239/26369">here</a>).</p> <p>I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation and everyone just followed his lead.</p> Answer 3: <p>You are correct that <span class="math-container">$dx^2$</span> actually means <span class="math-container">$(dx)^2$</span>. Or, even better, <span class="math-container">$(\text{d}x)^2$</span> (note the non-italicized <span class="math-container">$d$</span> because it is an operator, not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of <span class="math-container">$\frac{d^2y}{dx^2}$</span>. If you actually take the derivative of <span class="math-container">$\frac{dy}{dx}$</span> you will notice that <span class="math-container">$\frac{dy}{dx}$</span> is a quotient, and therefore requires the quotient rule to differentiate. After simplifying everything, the second derivative using algebraically-manipulable differentials is actually <span class="math-container">$\frac{d^2y}{dx^2} - \frac{dy}{dx}\frac{d^2x}{dx^2}$</span>. Here, <span class="math-container">$d^2y$</span> is a shorthand for <span class="math-container">$\textrm{d}(\textrm{d}(y))$</span> and <span class="math-container">$dx^2$</span> is a shorthand for <span class="math-container">$(\textrm{d}x)^2$</span>.</p> <p>Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper "Extending the Algebraic Manipulability of Differentials" explores this idea further.</p>
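That corrected formula is easy to sanity-check symbolically. The SymPy sketch below is my own illustration (not part of the original answers): it parametrizes the curve by an auxiliary variable $t$, reads $d^2y = y''(t)\,dt^2$, $d^2x = x''(t)\,dt^2$ and $dx^2 = x'(t)^2\,dt^2$, and confirms that the formula agrees with the true second derivative $\frac{d}{dx}\bigl(\frac{dy}{dx}\bigr)$:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# First derivative of a parametrized curve: dy/dx = y'(t) / x'(t)
dydx = sp.diff(y, t) / sp.diff(x, t)

# True second derivative: d/dx (dy/dx) = (d/dt (dy/dx)) / x'(t)
true_second = sp.diff(dydx, t) / sp.diff(x, t)

# The answer's formula, reading d2y = y'' dt^2, d2x = x'' dt^2, dx^2 = x'^2 dt^2
formula = sp.diff(y, t, 2) / sp.diff(x, t)**2 - dydx * sp.diff(x, t, 2) / sp.diff(x, t)**2

print(sp.simplify(true_second - formula))  # prints 0: the expressions agree
```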
$\frac{(x-1)(x+1)}{(x+1)} \rightarrow (x-1)$ Domain Change
2,085,521
9
https://math.stackexchange.com/questions/2085521/fracx-1x1-x1-rightarrow-x-1-domain-change
<p>Forgive my ignorance. The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change? I realize it does, but would someone be so kind as to provide an explanation?</p> <hr> <p>$$ \frac{x^2 - 1}{x + 1} \mbox{ is undefined when } x = -1 $$</p> <p>Its domain (the values that can go into the expression) does not include $ -1 $.</p> <p>Now, we can factor $ x^2 - 1 $ into $ (x - 1)(x + 1) $ so we get:</p> <p>$$ \frac{(x - 1)(x + 1)}{(x + 1)} $$</p> <p>It is now tempting to cancel $ (x + 1) $ from top and bottom to produce:</p> <p>$$ x - 1 $$</p> <p>$$ \mbox{Its domain now } \textbf{does} \mbox{ include } -1 \mbox{.} $$</p> <p>But it is now a different function because it has a different domain.</p> <hr> <p>Thanks!</p>
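The situation in the question is easy to reproduce in a computer algebra system. The SymPy snippet below is an illustrative addition (not part of the original post); it shows the cancellation, the undefined value at $x=-1$, and the limit that still exists there:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x + 1)

print(sp.cancel(f))        # x - 1 : the cancelled rule
print(f.subs(x, -1))       # nan   : the original expression is 0/0 at x = -1
print(sp.limit(f, x, -1))  # -2    : the limit exists even though f(-1) does not
```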
[ { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on whether you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if either of the denominators $Q(a)$ or $S(a)$ is zero. It not only fails to be true; it fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded?
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade math book generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of a function) by three parts: two sets and a \"rule\". The two sets are called the domain and range, and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such as $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, i.e., the set of all values for which it is possible to calculate the value of the expression by substituting $x$ with such a value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (i.e., one that assigns to a number its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$).</p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now, taking a look at this expression: when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so the value of the function equals $x-1$ for $x$ different from $-1$. So your function equals the function whose expression is $x-1$ on $\\mathbb{R} \\backslash \\{-1\\}$, because they have the same domain and their values are equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domains for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set, $Y$, called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$, is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of the codomain, $Y$.
An $x\\in X$ may be associated to more than one $y$ or to none at all; likewise, a given $y\\in Y$ may be associated to one or more $x$'s or to none at all.)</p>\n\n<p>As a side note: if you've ever wondered what $f$ itself is, it's actually the set of ordered pairs $(x,y)$ that specify the associations between each domain element and its unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts there is a usually unmentioned convention for specifying functions. Since one is often interested in functions from the real numbers to themselves, or from a subset of the real numbers to the real numbers, and these sets are (usually) infinite, there is a problem: how exactly do we capture where each and every domain element is mapped in the codomain? This is usually handled by the use of a <em>rule of assignment</em>, which specifies how we can obtain the unique codomain element $f(x)$ determined by a given choice of domain element $x$. Familiar formulas such as $f(x)=x+3$ and $f(x)=x^2$ are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well-understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic operations are not defined everywhere in the reals, though: for instance, one cannot divide by $0$ or take the square root of a negative number (and remain in the reals, at least). Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$; thus the convention is that the domain for a function specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is, the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$, and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first function's domain does not include $-1$, while the second function's does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider three different functions.</p>\n\n<p>The first function is your original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>This is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator, as long as you know what you are doing and keep track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$, but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear, as it does not state the domains of the functions explicitly and just looks at the formula and takes the largest possible subset of the real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc.) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense, however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$,</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and let's <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$, which now belongs to the domain: it hands you back $g(-1)=-2$. But $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go.
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on whether you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and multiplication for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if either of the denominators $Q(a)$ or $S(a)$ is zero. Not only does it fail to be true; it fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from $1$</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> statement of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade math book generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of a function) of three parts: two sets and a \"rule\". The two sets are called the domain and the range, and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such as $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, i.e., the set of all values for which the expression can be evaluated when $x$ is replaced by such a value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (i.e., a rule that assigns to each number its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now, taking a look at this expression: when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so the value of the function equals $x-1$ for $x$ different from $-1$. So your function equals the function whose expression is $x-1$ on $\\mathbb{R} \\backslash \\{-1\\}$, because they have the same domain and their values are equal at every point of their domain.</p>\n" }
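A quick worked check of the common thread in the answers above (this check is an added illustration, not quoted from any answer): the two rules agree wherever both make sense, and the only difference is that one of them fails to make sense at $-1$.

$$
f(2)=\frac{(2-1)(2+1)}{2+1}=\frac{1\cdot 3}{3}=1=g(2),
\qquad
f(-1)=\frac{(-1-1)(-1+1)}{-1+1}=\frac{0}{0}\ \text{(undefined)},
\qquad
g(-1)=-2.
$$

So $f$ and $g$ are equal as functions only once both are taken on the common domain $\mathbb{R}\setminus\{-1\}$.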
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider the three different functions.</p>\n\n<p>The first function is the function that you original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and lets <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" }, { "answer_id": 2085545, "score": 2, "text": "<blockquote>\n <p>The below seems 'inconsistent'. If canceling the $(x+1)$\n is 'legal', how does the domain change?</p>\n</blockquote>\n\n<p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p>\n\n<p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p>\n\n<p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p>\n\n<hr>\n\n<p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p>\n\n<p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is \"positive integers.\" Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p>\n\n<hr>\n\n<p>The point is that there is no <em>reason</em> to assume a domain of $\\mathbb{R}$ for the function: $$\\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? 
Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p>\n\n<p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p>\n\n<p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p>\n\n<hr>\n\n<p><sub>*I am here using the words \"domain\" and \"range\" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a \"function.\"</sub></p>\n" }, { "answer_id": 2085554, "score": 2, "text": "<p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is \"composed\" or \"defined\" (and this is not the formal definition of function) of three parts: two sets and a \"rule\". The two sets are called domain and range and the \"rule\" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the \"greatest\" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write \"the function $x^2$\", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\\mathbb{R}$). </p>\n\n<p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes \"the function $\\frac{(x-1)(x+1)}{x+1}$\", it is considering the function whose expression is $\\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\\mathbb{R}\\backslash \\{-1\\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\\mathbb{R} \\backslash \\{-1\\}$ because they have the same domain and their value is equal at every point of their domain.</p>\n" }, { "answer_id": 2085589, "score": 2, "text": "<p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p>\n\n<p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\\forall x \\in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\\in Y$.</p>\n\n<p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\\subseteq X\\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. 
An $x\\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\\in Y$, it may be associated to one or more $x$'s or none at all).</p>\n\n<p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p>\n\n<p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p>\n\n<p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>\n" }, { "answer_id": 2085526, "score": 7, "text": "<p>The function $f$ given by\n$$\nf(x):=\\frac{(x-1)(x+1)}{x+1}\n$$\nwith values in $\\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\\mathbb{R}\\backslash\\{-1\\}$.</p>\n\n<p>The function $g$ given by\n$$\ng(x):=x-1\n$$\nwith values in $\\mathbb{R}$ is defined for every $x\\in\\mathbb{R}$, so its domain is $\\mathbb{R}$.</p>\n\n<p>Thus, as <em>functions</em>, $f\\neq g$ because their domains are not the same.</p>\n\n<p>Perhaps your confusion comes from the fact that\n$$\n\\lim_{x\\to-1}f(x)=\\lim_{x\\to-1}\\frac{(x-1)(x+1)}{x+1}=\\lim_{x\\to-1}(x-1)=\\lim_{x\\to-1}g(x)\n$$\nBut this is true because the limit of a function as $x\\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\\in\\mathbb{R}\\backslash\\{-1\\}$.</p>\n" }, { "answer_id": 2085528, "score": 6, "text": "<p>Simply put, the first expression's domain does not include $-1$, while the second expression's does.</p>\n" }, { "answer_id": 2085532, "score": 6, "text": "<p>Consider three different functions.</p>\n\n<p>The first function is your original function.\n$$f:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$f(x)=\\frac{(x-1)(x+1)}{x+1}$$</p>\n\n<p>The second function is as follows.\n$$g:\\mathbb{R} \\setminus\\{ -1\\} \\rightarrow \\mathbb{R}$$\n$$g(x)=x-1$$</p>\n\n<p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator, as long as you know what you are doing and keep track of your domain.</p>\n\n<p>Now the third function is not exactly the same function as $f$, but it is very close, with an extra point in the domain.\n$$h:\\mathbb{R} \\rightarrow \\mathbb{R}$$\n$$h(x)=x-1$$</p>\n\n<p>I think the book isn't very clear: it does not state the domain of the functions explicitly, and it just looks at the formula and takes the largest possible subset of the real numbers. </p>\n\n<p>You are right that in cancelling out common terms, the domain shouldn't change.</p>\n" }, { "answer_id": 2085537, "score": 5, "text": "<p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc.) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense, however. If the domain is the same, the codomain is the same, and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p>\n\n<p>For example, let $f(x) = \\dfrac{x^2-1}{x+1}$ and $g(x) = x-1$,</p>\n\n<p>and let's <em>choose</em> the domain for both to be the set $\\{1,2,3,4,...\\}$ and let's <em>choose</em> the codomain to be the range (which is $\\{0,1,2,...\\}$).</p>\n\n<p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p>\n\n<p>If we choose the domain instead to be $\\mathbb R$ then we run into a problem. $g$ is happy with $-1$, which now belongs to the domain: it hands you back $g(-1)=-2$. But $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. 
They can't be the same if the function $f$ can't even exist.</p>\n\n<p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\\mathbb R \\setminus \\{-1\\}$, and to have codomain $\\mathbb R$, then $f$ and $g$ are again the same function.</p>\n" }, { "answer_id": 2085533, "score": 3, "text": "<p>The expression\n$$\\frac{x-1}{2x-2}$$\nis an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p>\n\n<p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality\n$$\\frac{x-1}{2x-2}=\\frac12$$\nis true because $(x-1)\\cdot 2=(2x-2)\\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on whether you assign a value to $x$ at all.</p>\n\n<p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions\n$$\\frac{P(x)}{Q(x)},\\frac{R(x)}{S(x)}$$\nare equivalent, then for <em>most</em> numbers $a$ the equality between numbers\n$$\\frac{P(a)}{Q(a)}=\\frac{R(a)}{S(a)}$$\nholds.</p>\n\n<p>But the equality fails if either of the denominators $Q(a)$ or $S(a)$ is zero. It does not merely fail to be true; it fails to make sense.</p>\n\n<p>In other words, the equality between algebraic fractions\n$$\\frac{x-1}{2x-2}=\\frac12$$\nimplies that the equality between numbers\n$$\\frac{a-1}{2a-2}=\\frac12$$\nholds for every number $a$ <strong>different from 1</strong>.</p>\n" } ]
Question: $\frac{(x-1)(x+1)}{ (x+1)} \rightarrow (x-1)$ Domain Change Answer 1: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 2: <p>Simply put, the first expression's domain does not include $-1$, while the second expression's does.</p> Answer 3: <p>Consider three different functions.</p> <p>The first function is your original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator, as long as you know what you are doing and keep track of your domain.</p> <p>Now the third function is not exactly the same function as $f$, but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear: it does not state the domain of the functions explicitly, and it just looks at the formula and takes the largest possible subset of the real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 4: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc.) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense, however. If the domain is the same, the codomain is the same, and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$,</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and let's <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$, which now belongs to the domain: it hands you back $g(-1)=-2$. But $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p>
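A minimal Python sketch of the point made in Answers 1 through 4 (an illustration added for concreteness, not taken from any answer; the rules are modelled naively as Python functions, and the names <code>f</code> and <code>g</code> simply mirror the answers above):
<pre><code>
def f(x):
    # the rule (x - 1)(x + 1) / (x + 1); evaluation fails where the denominator is 0
    return (x - 1) * (x + 1) / (x + 1)

def g(x):
    # the rule x - 1; evaluates for every real input
    return x - 1

print(g(-1))        # -2: g accepts -1
print(f(2), g(2))   # 1.0 1: the two rules agree away from -1
try:
    f(-1)
except ZeroDivisionError:
    print("f has no value at x = -1")  # -1 lies outside f's implicit domain
</code></pre>
Nothing here is Python-specific: any way of evaluating the uncancelled rule must exclude $-1$, which is exactly the implicit-domain point of the answers above.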
Answer 5: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on whether you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if either of the denominators $Q(a)$ or $S(a)$ is zero. It does not merely fail to be true; it fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 6: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade math book generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p>
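Answer 5's split between a polynomial identity and a pointwise equality of numbers can be checked in the same spirit; a small sketch, assuming plain Python arithmetic (the choice of five sample points is arbitrary: agreement at two points already forces equality of degree-$1$ polynomials):
<pre><code>
# Cross-multiplied form: (x - 1) * 2 == (2x - 2) * 1 as polynomials.
# Both sides have degree 1, so agreement at 2 or more points proves the identity.
print(all((a - 1) * 2 == (2 * a - 2) * 1 for a in range(5)))  # True

# Pointwise, the fraction equality holds wherever the denominator is nonzero...
print((3 - 1) / (2 * 3 - 2))  # 0.5
# ...but at a = 1 it does not even make sense:
try:
    (1 - 1) / (2 * 1 - 2)
except ZeroDivisionError:
    print("undefined at a = 1")  # the equality of numbers fails to make sense here
</code></pre>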
Answer 7: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not have an expression). A function is "composed" or "defined" (and this is not the formal definition of a function) of three parts: two sets and a "rule". The two sets are called the domain and the range, and the "rule" is an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such as $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can serve as the domain for a given expression, i.e., the set of all values for which it is possible to calculate the value of the expression when $x$ is replaced by such a value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (i.e., the rule that assigns to each number its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Looking at this expression: when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so the value of the function equals $x-1$ for every $x$ different from $-1$. Hence your function equals the function whose expression is $x-1$ on $\mathbb{R} \backslash \{-1\}$, because they have the same domain and their values agree at every point of that domain.</p>
Answer 8: <p>This conundrum is a common issue students encounter when working with functions and domains for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$, is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of the codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or to none at all; likewise, a given $y\in Y$ may be associated to one or more $x$'s or to none at all.)</p> <p>As a sidenote: if you've ever wondered what $f$ itself is, it's actually the set of ordered pairs $(x,y)$ that specify the associations between each domain element and its unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts there is a usually unmentioned convention for specifying functions. One is often interested in functions from the real numbers, or a subset of the real numbers, to the real numbers, and these sets are (usually) infinite, so there is a problem: how exactly do we hope to capture where each and every domain element is mapped in the codomain? This is usually handled by the use of a <em>rule of assignment</em>, which specifies how we can obtain the unique codomain element $f(x)$ determined by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals, though: for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least), etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, so the convention is that the domain of a function specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>
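A finite-sample sketch of Answer 8's largest-domain convention, again in plain Python; the helper name <code>defined_points</code> is a hypothetical name introduced here, and sampling a finite candidate set only illustrates the convention rather than computing the domain exactly:
<pre><code>
def defined_points(rule, candidates):
    """Return the candidate inputs at which the rule evaluates without error.

    Illustrative only: this samples the "largest set where the rule makes
    sense" on a finite set of candidates.
    """
    good = []
    for x in candidates:
        try:
            rule(x)
            good.append(x)
        except (ZeroDivisionError, ValueError):
            pass  # the rule "does not make sense" at this x
    return good

# The rule (x^2 - 1) / (x + 1) fails only at x = -1 ...
print(defined_points(lambda x: (x * x - 1) / (x + 1), range(-3, 4)))
# [-3, -2, 0, 1, 2, 3]

# ... while the cancelled rule x - 1 makes sense at every sampled point.
print(defined_points(lambda x: x - 1, range(-3, 4)))
# [-3, -2, -1, 0, 1, 2, 3]
</code></pre>
The two rules differ only in where they make sense, which is exactly why, under the largest-domain convention, the cancelled and uncancelled forms name different functions.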
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 10: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 11: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 12: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 13: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 14: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 15: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 16: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
Answer 16: <p>This conundrum is a common issue students encounter when working with functions and domains for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set $X$, called the domain, and a set $Y$, called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p> <p>(A <em>relation</em> $R$ is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain $X$ and (some) elements of the codomain $Y$. An $x\in X$ may be associated to more than one $y$ or to none at all; likewise, a given $y\in Y$ may be associated to one or more $x$'s or to none at all.)</p> <p>As a sidenote: if you've ever wondered what $f$ itself is, it's actually the set of ordered pairs $(x,y)$ that specify the association between each domain element and its unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts there is a usually unmentioned convention for denoting functions. One is often interested in functions from the real numbers to themselves, or from a subset of the real numbers to the real numbers, and these sets are (usually) infinite, so there is a genuine problem: how do we specify where each and every domain element is mapped in the codomain? This is usually handled by a <em>rule of assignment</em>, which specifies how to obtain the unique codomain element $f(x)$ determined by a given choice of domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ are rules of assignment. They use arithmetic operations and (later) other well-understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals, though: one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least), etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$; thus the convention is that the domain of a function specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or, in more formal language, for which that rule <em>is definable</em>). Hence the task actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>
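<p>As a worked instance of this convention (added for illustration): for the rule $f(x)=\frac{\sqrt{x}}{x-3}$, the square root requires $x\ge 0$ and the denominator excludes $x=3$, so the conventionally defined domain is $[0,3)\cup(3,\infty)$.</p>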
Answer 17: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator is $0$ there). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$, because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x). $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$, and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p>
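<p>To make the limit statement concrete (an added numerical check): at $x=-1.01$, both functions give $f(-1.01)=g(-1.01)=-2.01$, and the common value approaches $-2$ as $x\to-1$; yet $g(-1)=-2$ is defined while $f(-1)$ is not.</p>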
Answer 18: <p>Simply put, the first function's domain does not include $-1$, while the second function's does.</p> Answer 19: <p>Consider three different functions.</p> <p>The first function is your original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator, as long as you know what you are doing and keep track of your domain.</p> <p>Now the third function is not exactly the same function as $f$, but it is very close, with one extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear, as it does not state the domains of the functions explicitly; it just looks at the formula and takes the largest possible subset of the real numbers as the domain.</p> <p>You are right that cancelling out common terms shouldn't change the domain.</p> Answer 20: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc.) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense, however. If the domain is the same, the codomain is the same, and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$,</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and let's <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$, then we run into a problem. $g$ is happy with $-1$, which now belongs to the domain; it hands you back $g(-1)=-2$. But $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p>
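<p>Checking the claim above at a sample point of the chosen domain (an added computation): for $N=3$, $$f(3)=\frac{3^2-1}{3+1}=\frac{8}{4}=2 \quad\text{and}\quad g(3)=3-1=2,$$ so the two rules indeed agree there, and the same happens at every $N\in\{1,2,3,4,...\}$.</p>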
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 25: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 26: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 27: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 28: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 29: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 30: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 31: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 32: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 33: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 34: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 35: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 36: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 37: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 38: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 39: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 40: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 41: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 42: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 43: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 44: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 45: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 46: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 47: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 48: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 49: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 50: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 51: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 52: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 53: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 54: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 55: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 56: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
Answer 56: <p>This conundrum is a common issue students encounter when working with functions and domains for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set $X$, called the domain, and a set $Y$, called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p> <p>(A <em>relation</em> $R$ is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain $X$ and (some) elements of the codomain $Y$. An $x\in X$ may be associated to more than one $y$ or to none at all; likewise, a given $y\in Y$ may be associated to one or more $x$'s or to none at all.)</p> <p>As a side note: if you've ever wondered what $f$ itself is, it's actually the set of ordered pairs $(x,y)$ that specify the associations between each domain element and its unique corresponding codomain element. The expression $f(x)$ denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e., the corresponding $y$ in the codomain).</p> <p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts there is a usually unmentioned convention for denoting functions. One is often interested in functions from the real numbers to themselves, or from a subset of the real numbers to the real numbers, and these sets are (usually) infinite, so there is a problem: how do we capture exactly where each and every domain element is mapped in the codomain? This is usually handled by a <em>rule of assignment</em>, which specifies how we can obtain the unique codomain element $f(x)$ determined by a given choice of domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ are rules of assignment. These familiar examples use arithmetic operations and (later) other well-understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals, though: for instance, one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least), etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, so the convention is that the domain of a function specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or, in more formal language, for which that rule <em>is definable</em>). Hence the task actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>
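<p>A tiny finite example may help fix the idea (the sets here are invented purely for illustration): take $X=\{1,2\}$ and $Y=\{a,b\}$. Then $$f=\{(1,a),(2,a)\}\subseteq X\times Y$$ is a function from $X$ to $Y$, with $f(1)=a$ and $f(2)=a$, whereas $$R=\{(1,a),(1,b)\}$$ is a relation but <em>not</em> a function, since the domain element $1$ is related to two different codomain elements.</p>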
Answer 57: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator would be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x). $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$, and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 58: <p>Simply put, the first function's domain does not include $-1$, while the second function's does.</p> Answer 59: <p>Consider three different functions.</p> <p>The first function is your original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator, as long as you know what you are doing and keep track of your domain.</p> <p>Now the third function is not exactly the same function as $f$, but it is very close, with one extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear: it does not state the domains of the functions explicitly, and instead just looks at each formula and takes the largest possible subset of the real numbers.</p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p>
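<p>Evaluating at a couple of points makes the distinction concrete (the sample inputs below are chosen arbitrarily, just for illustration): $$f(2)=\frac{(2-1)(2+1)}{2+1}=1,\qquad g(2)=2-1=1,\qquad h(2)=2-1=1,$$ so all three agree wherever they are all defined; but at $x=-1$ we get $$h(-1)=-1-1=-2,$$ while $f(-1)$ and $g(-1)$ are simply undefined, because $-1$ is not in their domain.</p>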
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 62: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 63: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 64: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 65: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 66: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 67: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 68: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 69: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 70: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 71: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 72: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 73: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 74: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 75: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 76: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 77: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 78: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 79: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 80: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 81: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 82: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 83: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 84: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 85: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 86: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 87: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 88: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 89: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 90: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 91: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 92: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 93: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 94: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 95: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
Answer 94: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> statement of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first-grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first-grade textbook. Perhaps a more interesting example is that in a 9th-grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function $$\frac {x^2-1} {x+1}.$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first-grade math book generally doesn't include <em>anything</em> that could strictly be called a "function."</sub></p>
Answer 95: <p>First of all, it is very important to know that a function is not just an expression (in fact, a function need not have an expression at all). A function is "composed" or "defined" (and this is not the formal definition of a function) by three parts: two sets and a "rule". The two sets are called the domain and the range, and the "rule" is an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such as $x^2$ or $x-1$ are not by themselves functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can serve as the domain for a given expression, i.e., the set of all values for which it is possible to calculate the value of the expression by substituting $x$ with such a value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (i.e., the rule that assigns to a number its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$).</p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now, looking at this expression: when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so the value of the function equals $x-1$ for $x$ different from $-1$. So your function equals the function whose expression is $x-1$ and whose domain is $\mathbb{R} \backslash \{-1\}$, because they have the same domain and their values are equal at every point of that domain.</p>
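<p>The "greatest possible domain" idea can even be probed mechanically. The following Python sketch is only an illustration, and <code>natural_domain_points</code> is a made-up helper rather than a standard library function: it tests a rule over a finite set of sample points and keeps those where the rule evaluates.</p>
<pre><code>def natural_domain_points(rule, candidates):
    """Keep the sample points at which the rule actually evaluates."""
    good = []
    for x in candidates:
        try:
            rule(x)
            good.append(x)
        except ZeroDivisionError:
            pass              # x is excluded from the natural domain
    return good

def rule(x):
    return (x - 1) * (x + 1) / (x + 1)

print(natural_domain_points(rule, range(-3, 4)))
# [-3, -2, 0, 1, 2, 3]  -- every sample point except -1
</code></pre>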
Answer 96: <p>This conundrum is a common issue students encounter when working with functions and domains for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set $X$, called the domain, and a set $Y$, called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p> <p>(A <em>relation</em> $R$ is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of the codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all; likewise, a given $y\in Y$ may be associated to one or more $x$'s or none at all.)</p> <p>As a sidenote: if you've ever wondered what $f$ itself is, it's actually the set of ordered pairs $(x,y)$ that specifies the associations between each domain element and its unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts there is a usually unmentioned convention for denoting functions. Since one is often interested in functions from the real numbers to themselves, or from a subset of the real numbers to the real numbers, and these sets are (usually) infinite, there is a problem: how do we capture exactly where each and every domain element is mapped in the codomain? This is usually handled by the use of a <em>rule of assignment</em>, which specifies how we can obtain the unique codomain element $f(x)$ determined by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals, though: for instance, one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least), etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, so the convention is that the domain for a function specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or, in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is, the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>
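<p>The "set of ordered pairs" view above is easy to state concretely for finite sets. In the sketch below (an illustration; the predicate <code>is_function</code> is our own name, not a standard one), a relation qualifies as a function exactly when every domain element has exactly one image.</p>
<pre><code># the rule "subtract 1" on the finite domain {1, 2, 3}, written as a set of pairs
f = {(1, 0), (2, 1), (3, 2)}

def is_function(pairs, domain):
    # a relation is a function iff each domain element has exactly one image
    return all(sum(1 for (x, _) in pairs if x == d) == 1 for d in domain)

print(is_function(f, {1, 2, 3}))               # True
print(is_function(f | {(1, 5)}, {1, 2, 3}))    # False: 1 would have two images
</code></pre>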
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 101: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 102: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 103: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 104: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 105: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 106: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 107: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 108: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 109: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 110: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 111: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 112: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 113: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 114: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 115: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 116: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 117: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 118: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 119: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 120: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 121: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). 
Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 122: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 123: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 124: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 125: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. 
In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 126: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade mathbook generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 127: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not to have an expression). A function is "composed" or "defined" (and this is not the formal definition of function) of three parts: two sets and a "rule". The two sets are called domain and range and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. 
What I am trying to say is that expressions such $x^2, x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, ie, the set of all values for which it is plausible to calculate the value of the expression substituting $x$ by such value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (ie, a that assigns a certain number to its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$). </p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book, writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression, when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$, when for x different than $-1$. So it happens that your function equals the function whose expression is $x-1$ in $\mathbb{R} \backslash \{-1\}$ because they have the same domain and their value is equal at every point of their domain.</p> Answer 128: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$), is related to exactly one <em>corresponding</em> element of the codomain $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$ is merely a subset of the Cartesian product of a domain with a codomain $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all, likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all).</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$ that specify the associations between each domain element and it's unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and attendant concepts are glossed over, or avoided entirely when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when they are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts, there is a usually unmentioned convention for denoting functions that is taken. Since one is often interested in functions between the real numbers and itself, or between a subset of the real numbers and the real numbers, and these sets are (usually) all infinite. There exists a bit of a problem in how exactly we hope to capture exactly where each and every domain element is mapped to in the codomain? 
This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though, for instance one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least) etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p> Answer 129: <p>The function $f$ given by $$ f(x):=\frac{(x-1)(x+1)}{x+1} $$ with values in $\mathbb{R}$ can't be defined at $x:=-1$ (because the denominator will be $0$). Hence the domain of $f$ is at most $\mathbb{R}\backslash\{-1\}$.</p> <p>The function $g$ given by $$ g(x):=x-1 $$ with values in $\mathbb{R}$ is defined for every $x\in\mathbb{R}$, so its domain is $\mathbb{R}$.</p> <p>Thus, as <em>functions</em>, $f\neq g$ because their domains are not the same.</p> <p>Perhaps your confusion comes from the fact that $$ \lim_{x\to-1}f(x)=\lim_{x\to-1}\frac{(x-1)(x+1)}{x+1}=\lim_{x\to-1}(x-1)=\lim_{x\to-1}g(x) $$ But this is true because the limit of a function as $x\to-1$ is completely independent of the value of the function at $-1$ and $f(x)=g(x)$ for every $x\in\mathbb{R}\backslash\{-1\}$.</p> Answer 130: <p>Simply put, the first equation's domain does not include $-1$, while the second equation does.</p> Answer 131: <p>Consider the three different functions.</p> <p>The first function is the function that you original function. $$f:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$f(x)=\frac{(x-1)(x+1)}{x+1}$$</p> <p>The second function is as follows. $$g:\mathbb{R} \setminus\{ -1\} \rightarrow \mathbb{R}$$ $$g(x)=x-1$$</p> <p>It is actually the same function as $f$. It is alright to cancel the common terms from both numerator and denominator., as long as you know what you are doing and keeping track of your domain.</p> <p>Now the third function is not exactly the same function as $f$ but it is very close, with an extra point in the domain. $$h:\mathbb{R} \rightarrow \mathbb{R}$$ $$h(x)=x-1$$</p> <p>I think the book isn't very clear as it does not state the domain of the functions explicitly and just look at the formula and take the largest possible subset of real numbers. </p> <p>You are right that in cancelling out common terms, the domain shouldn't change.</p> Answer 132: <p>A function is <strong>not just an expression</strong>. It is a special assignment (a formula, a table, a diagram, etc) together with a <em>chosen</em> domain and a <em>chosen</em> codomain. The choices must make sense however. 
If the domain is the same, the codomain is the same and $f(x)$ and $g(x)$ coincide for every member $x$ of the domain, <em>only then do we say the functions are equal.</em></p> <p>For example, let $f(x) = \dfrac{x^2-1}{x+1}$ and $g(x) = x-1$.</p> <p>and let's <em>choose</em> the domain for both to be the set $\{1,2,3,4,...\}$ and lets <em>choose</em> the codomain to be the range (which is $\{0,1,2,...\}$).</p> <p>In this case, $f$ and $g$ <strong>are</strong> the same function. For every member $N$ of the domain, $f(N)$ is equal to $g(N)$.</p> <p>If we choose the domain instead to be $\mathbb R$ then we run into a problem. $g$ is happy with $-1$ which now belongs to the domain, it hands you back $g(-1)=-2$. but $f(-1)$ doesn't make sense, so the domain we chose for $f$ doesn't even work. We're broken from the get-go. They can't be the same if the function $f$ can't even exist.</p> <p>Now if we choose the domain for both of them to be <em>everything except for $-1$</em>, sometimes written $\mathbb R \setminus \{-1\}$, and to have codomain $\mathbb R$, then $f$ and $g$ are again the same function.</p> Answer 133: <p>The expression $$\frac{x-1}{2x-2}$$ is an algebraic fraction, that is, a quotient of polynomials (just like the usual fractions are quotients of integers).</p> <p>As an algebraic fraction, the expression above is equivalent to $1/2$. That means that the equality $$\frac{x-1}{2x-2}=\frac12$$ is true because $(x-1)\cdot 2=(2x-2)\cdot 1$. This does not depend on the value you assign to $x$. In fact, this does not depend on if you assign a value to $x$ at all.</p> <p>But since the basic properties of addition and product for polynomials and numbers are the same, it's true that if the algebraic fractions $$\frac{P(x)}{Q(x)},\frac{R(x)}{S(x)}$$ are equivalent, then for <em>most</em> numbers $a$ the equality between numbers $$\frac{P(a)}{Q(a)}=\frac{R(a)}{S(a)}$$ holds.</p> <p>But the equality fails if some of the denominators $Q(a)$ or $S(a)$ is zero. Not only fails to be true. It fails to make sense.</p> <p>In other words, the equality between algebraic fractions $$\frac{x-1}{2x-2}=\frac12$$ implies that the equality between numbers $$\frac{a-1}{2a-2}=\frac12$$ holds for every number $a$ <strong>different from 1</strong>.</p> Answer 134: <blockquote> <p>The below seems 'inconsistent'. If canceling the $(x+1)$ is 'legal', how does the domain change?</p> </blockquote> <p>You are correct; it <em>is</em> inconsistent. <strong>The source you are quoting is badly written (or badly explained).</strong></p> <p>The mathematical fact is that <em>a function may have any arbitrary domain.</em></p> <p>However, if the function is mathematically undefined for certain inputs, then the domain must <em>implicitly</em> exclude those inputs. Thus the domain of $$f(x)=1/x$$ implicitly excludes $x=0$ from the domain.</p> <hr> <p>Strictly speaking, <strong>a function definition is <em>not complete</em> without an <em>explicit</em> expression of the domain.</strong></p> <p>In school textbooks, however, the domain (and range) is usually implicitly assumed to include all types of numbers the student has learned so far (and no others), unless stated otherwise. For example, in a first grade textbook, $$3-7=?$$ is assumed not to have any solution, because the implied domain* (and range) for all arithmetic as of first grade is "positive integers." Likewise $$13/7=?$$ implicitly has no solution in a first grade textbook. 
Perhaps a more interesting example is that in a 9th grade textbook, $$x^2+9=0$$ implicitly has no solution.</p> <hr> <p>The point is that there is no <em>reason</em> to assume a domain of $\mathbb{R}$ for the function: $$\frac {x^2-1} {x+1}$$ Why should complex numbers be excluded? Unless the domain is actually defined explicitly, it is just an assumption that goes along with whatever level of math textbook you are studying.</p> <p>So the domain is only implicit—and the point is that <strong>canceling the $x+1$ while <em>leaving</em> the domain implicit results in a function with a different domain, which is therefore a different function.</strong></p> <p>However, it is perfectly legal <em>and</em> consistent to cancel out the $x+1$ <em>while retaining the same domain.</em> It simply must be stated <em>explicitly</em> rather than <em>implicitly.</em></p> <hr> <p><sub>*I am here using the words "domain" and "range" loosely, as a first grade math book generally doesn't include <em>anything</em> that could be strictly called a "function."</sub></p> Answer 135: <p>First of all, it is very important that you know that a function is not just an expression (in fact, a function need not have an expression). A function is "composed" or "defined" (and this is not the formal definition of a function) of three parts: two sets and a "rule". The two sets are called the domain and the range, and the "rule" is supposed to be an association which assigns to every element of the domain one, and only one, element of the range. What I am trying to say is that expressions such as $x^2$, $x-1$ and many others are not functions, for they have no domain. Now, what is usually asked in exercises is for us to determine the "greatest" set which can be the domain for a given expression, i.e., the set of all values for which it is possible to calculate the value of the expression by substituting $x$ with such a value. However, when we write "the function $x^2$", we are usually considering the function whose rule is $x^2$ (i.e., a rule that assigns to a certain number its square) and whose domain is the greatest possible (in this case, $\mathbb{R}$).</p> <p>Now let me get to what you actually asked. No, the domain does not change. When your book writes "the function $\frac{(x-1)(x+1)}{x+1}$", it is considering the function whose expression is $\frac{(x-1)(x+1)}{x+1}$ and whose domain is the greatest possible, which in this case is $\mathbb{R}\backslash \{-1\}$. Now taking a look at this expression: when $x$ is not equal to $-1$, there is no problem in cancelling out $x+1$, so it happens that the value of the function equals $x-1$ for $x$ different from $-1$. So it happens that your function equals the function whose expression is $x-1$ on $\mathbb{R} \backslash \{-1\}$, because they have the same domain and their values are equal at every point of their domain.</p> Answer 136: <p>This conundrum is a common issue students encounter when working with functions and domain for the first time. 
It's a symptom of the fact that we don't properly define what a function is from the start:</p> <p>A <em>function</em> in mathematics is a <em>relation</em>, $f$, between a set, $X$, called the domain, and a set $Y$ called the codomain, such that each element of the domain ($\forall x \in X$) is related to exactly one <em>corresponding</em> element of the codomain, $f(x)=y\in Y$.</p> <p>(A <em>relation</em>, $R$, is merely a subset of the Cartesian product of a domain with a codomain, $R\subseteq X\times Y$, which we view as an association between (some) elements of the domain, $X$, and (some) elements of the codomain, $Y$. An $x\in X$ may be associated to more than one $y$ or none at all; likewise, for a given $y\in Y$, it may be associated to one or more $x$'s or none at all.)</p> <p>As a sidenote: If you've ever wondered what $f$ itself is, it's actually the set of ordered pairs, $(x,y)$, that specify the associations between each domain element and its unique corresponding codomain element. The expression $f(x)$ actually denotes what's known as the <em>image under $f$</em> of the domain element $x$ (i.e. the corresponding $y$ in the codomain).</p> <p>This proper definition and its attendant concepts are glossed over, or avoided entirely, when functions are first introduced, and that pedagogical mistake (imo) comes home to roost when students are then asked about <em>domain</em> (and <em>range</em>, not to be confused with <em>codomain</em>; they are related but not the same thing).</p> <p>Worse yet, in familiar contexts there is a usually unmentioned convention for denoting functions. Since one is often interested in functions from the real numbers (or a subset of them) to the real numbers, and these sets are usually infinite, there is a genuine problem in how exactly we hope to capture where each and every domain element is mapped in the codomain. This is usually handled by the use of a <em>rule of assignment</em> which specifies how we can obtain the unique codomain element $f(x)$ specified by a given choice of a domain element $x$. Familiar examples such as $f(x)=x+3$ and $f(x)=x^2$ and many more are examples of such rules of assignment. These familiar examples use arithmetic operations and (later) other well understood functions to specify the corresponding codomain element for <em>every</em> possible domain element. Some of our arithmetic rules are not defined everywhere in the reals though; for instance, one cannot divide by $0$, or take a square root of a negative number (and remain in the reals, at least), etc. Because of this, rules like $1/x$ do not make sense over all of $\mathbb{R}$, thus the convention is that the domain for functions specified by a <em>rule of assignment</em> is the largest possible subset of the reals <em>for which that rule makes sense</em> (or in more formal language: for which that rule <em>is definable</em>). Hence the task that is actually being asked of beginning algebra students is to deduce the largest subset of the reals for which a given arithmetic expression (that is, the given function's rule of assignment) actually makes sense. That is the <em>conventionally defined</em> domain of that function.</p>
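To condense the point shared by all of these answers into a single worked check (an illustrative summary assembled from the answers above, not a quote from any one of them): take $$ f(x)=\frac{(x-1)(x+1)}{x+1}, \qquad g(x)=x-1. $$ For any $x\neq -1$ the factor $x+1$ is nonzero and cancels, so $f(x)=g(x)$; at $x=-1$, however, evaluating $f$ would require dividing by $0$, so $f(-1)$ is undefined while $g(-1)=-2$. Hence $f$ and $g$ are equal as functions on $\mathbb{R}\setminus\{-1\}$, but only $g$ extends to all of $\mathbb{R}$.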
Is zero to the power of an imaginary number still zero?
2,087,100
9
https://math.stackexchange.com/questions/2087100/is-zero-to-the-power-of-an-imaginary-number-still-zero
<p>I just want to make sure that $0^i = 0$, but for some reason I couldn't find anything about this online.</p> <p>Is this true?</p> <p>--Background--</p> <p>I'm trying to prove that some exponent is zero. I thought I'd raise each side to the power of $i$ so that I could use Euler's formula.</p>
[ { "answer_id": 2087136, "score": 12, "text": "<p>Standard definition of a complex exponent $w^z$ is $$ w^z=e^{z\\log(w)} = e^{a\\log(w)}e^{bi\\log(w)}$$</p>\n\n<p>where $z = a +bi.$ For $w = re^{i\\theta},$ this gives\n$$\nw^z = e^{a\\ln(r)}e^{ai\\theta}e^{bi\\ln(r)}e^{-b\\theta}.\n$$\nFor $w = 0,$ we have $r=0$ and $\\theta =$ whatever. $\\ln(0)$ is undefined but we can take the limit $r\\downarrow0$ so that $\\ln(r)\\downarrow-\\infty.$ The limit of the expression for $w^z$ is only zero if $a &gt;0,$ otherwise it blows up or oscillates. In other words if $\\Re(z)&gt;0$ then $0^z=0.$ Otherwise it's infinite or undefined.</p>\n" }, { "answer_id": 4297912, "score": 0, "text": "<p>Let’s consider the general expression <span class=\"math-container\">$x^i$</span>.\nWe can say that:\n<span class=\"math-container\">$$\nx^i = e^{i \\ln(x)}\n$$</span></p>\n<p>and, according to Euler’s formula <span class=\"math-container\">$e^{i \\theta} = cos(\\theta) + i \\sin(\\theta)$</span>, we can say that:\n<span class=\"math-container\">$$\nx^i = e^{i \\ln(x)} = \\cos(\\ln(x)) + i \\sin(\\ln(x))\n$$</span>\nNow, if <span class=\"math-container\">$x = 0$</span>, we have:\n<span class=\"math-container\">$$\n0^i = \\cos(\\ln(0)) + i \\sin(\\ln(0))\n$$</span></p>\n<p><span class=\"math-container\">$ln(0)$</span> doesn’t exist, but, if we take the limit, we can say that <span class=\"math-container\">$ln(0) \\to - \\infty$</span>\nTherefore:\n<span class=\"math-container\">$$\n0^i = \\cos(-\\infty) + i \\sin(-\\infty)\n$$</span>\nBut the functions <span class=\"math-container\">$\\sin(x)$</span> and <span class=\"math-container\">$\\cos(x)$</span> don’t have limits when <span class=\"math-container\">$x$</span> approaches to infinite, since they oscillate.\nTherefore, <span class=\"math-container\">$\\cos(-\\infty) + i \\sin(-\\infty)$</span> is undefined, hence <span class=\"math-container\">$0^i$</span> is undefined and it’s not equal to 0.\nFor the same reason <span class=\"math-container\">$\\infty^i$</span> and <span class=\"math-container\">$(-\\infty)^i$</span> are undefined too.</p>\n" }, { "answer_id": 2087136, "score": 12, "text": "<p>Standard definition of a complex exponent $w^z$ is $$ w^z=e^{z\\log(w)} = e^{a\\log(w)}e^{bi\\log(w)}$$</p>\n\n<p>where $z = a +bi.$ For $w = re^{i\\theta},$ this gives\n$$\nw^z = e^{a\\ln(r)}e^{ai\\theta}e^{bi\\ln(r)}e^{-b\\theta}.\n$$\nFor $w = 0,$ we have $r=0$ and $\\theta =$ whatever. $\\ln(0)$ is undefined but we can take the limit $r\\downarrow0$ so that $\\ln(r)\\downarrow-\\infty.$ The limit of the expression for $w^z$ is only zero if $a &gt;0,$ otherwise it blows up or oscillates. 
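A quick way to see the shared conclusion of both answers is to track the modulus (a worked check assembled from the formulas in the answers above, not an additional quoted answer): for real $r > 0$, $$ |r^{i}| = |e^{i\ln(r)}| = 1, $$ so as $r \downarrow 0$ the value $r^{i}$ keeps circling the unit circle and has no limit, whereas $$ |r^{1+i}| = r\,|r^{i}| = r \to 0 \quad\text{as } r \downarrow 0, $$ in agreement with the rule that $0^z = 0$ exactly when $\Re(z) > 0$.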
Question: Is zero to the power of an imaginary number still zero? Answer 1: <p>Standard definition of a complex exponent $w^z$ is $$ w^z=e^{z\log(w)} = e^{a\log(w)}e^{bi\log(w)}$$</p> <p>where $z = a +bi.$ For $w = re^{i\theta},$ this gives $$ w^z = e^{a\ln(r)}e^{ai\theta}e^{bi\ln(r)}e^{-b\theta}. $$ For $w = 0,$ we have $r=0$ and $\theta =$ whatever. $\ln(0)$ is undefined but we can take the limit $r\downarrow0$ so that $\ln(r)\downarrow-\infty.$ The limit of the expression for $w^z$ is only zero if $a &gt;0,$ otherwise it blows up or oscillates. In other words if $\Re(z)&gt;0$ then $0^z=0.$ Otherwise it's infinite or undefined.</p> Answer 2: <p>Let’s consider the general expression <span class="math-container">$x^i$</span>. We can say that: <span class="math-container">$$ x^i = e^{i \ln(x)} $$</span></p> <p>and, according to Euler’s formula <span class="math-container">$e^{i \theta} = cos(\theta) + i \sin(\theta)$</span>, we can say that: <span class="math-container">$$ x^i = e^{i \ln(x)} = \cos(\ln(x)) + i \sin(\ln(x)) $$</span> Now, if <span class="math-container">$x = 0$</span>, we have: <span class="math-container">$$ 0^i = \cos(\ln(0)) + i \sin(\ln(0)) $$</span></p> <p><span class="math-container">$ln(0)$</span> doesn’t exist, but, if we take the limit, we can say that <span class="math-container">$ln(0) \to - \infty$</span> Therefore: <span class="math-container">$$ 0^i = \cos(-\infty) + i \sin(-\infty) $$</span> But the functions <span class="math-container">$\sin(x)$</span> and <span class="math-container">$\cos(x)$</span> don’t have limits when <span class="math-container">$x$</span> approaches to infinite, since they oscillate. Therefore, <span class="math-container">$\cos(-\infty) + i \sin(-\infty)$</span> is undefined, hence <span class="math-container">$0^i$</span> is undefined and it’s not equal to 0. For the same reason <span class="math-container">$\infty^i$</span> and <span class="math-container">$(-\infty)^i$</span> are undefined too.</p> Answer 3: <p>Standard definition of a complex exponent $w^z$ is $$ w^z=e^{z\log(w)} = e^{a\log(w)}e^{bi\log(w)}$$</p> <p>where $z = a +bi.$ For $w = re^{i\theta},$ this gives $$ w^z = e^{a\ln(r)}e^{ai\theta}e^{bi\ln(r)}e^{-b\theta}. $$ For $w = 0,$ we have $r=0$ and $\theta =$ whatever. $\ln(0)$ is undefined but we can take the limit $r\downarrow0$ so that $\ln(r)\downarrow-\infty.$ The limit of the expression for $w^z$ is only zero if $a &gt;0,$ otherwise it blows up or oscillates. In other words if $\Re(z)&gt;0$ then $0^z=0.$ Otherwise it's infinite or undefined.</p> Answer 4: <p>Let’s consider the general expression <span class="math-container">$x^i$</span>. 
A combinatorial identity with binomial coefficients and floor function.
2,087,559
9
https://math.stackexchange.com/questions/2087559/a-combinatorial-identity-with-binomial-coefficients-and-floor-function
<p><strong>Show that</strong> $$ \sum_{m=0}^{2k+1}{2^{m}\binom{n}{m}\binom{n-m}{\bigl\lfloor \frac{2k+1-m}{2} \bigr\rfloor}}=\binom{2n+1}{2k+1} $$ I tried expanding the sum and using induction, but could not complete the induction step; I tried using Pascal's identity to obtain $$ \binom{n-m}{\bigl\lfloor \frac{2k+3-m}{2} \bigr\rfloor}=\binom{n-m+1}{\bigl\lfloor \frac{2k+3-m}{2} \bigr\rfloor}-\binom{n-m}{\bigl\lfloor \frac{2k+1-m}{2} \bigr\rfloor} $$ but couldn't find other identities to complete the induction step. Searching Gould's Combinatorial Identities brought up nothing, though I could easily have missed something useful there.</p> <p>I'm looking to complete the induction step, but would also like to find an algebraic or a combinatorial proof. I would like to avoid using trigonometric and root functions, but would like to use complex numbers in Cartesian form.</p>
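<p>A minimal numeric spot-check of the identity (an illustrative sketch, not part of the original post; the helper names <code>lhs</code>/<code>rhs</code> are ad hoc, and <code>math.comb</code> requires Python 3.8+). For example, $n=3$, $k=1$ gives $3+12+12+8=35=\binom{7}{3}$.</p> <pre><code>from math import comb

def lhs(n, k):
    # Left-hand side: sum over m of 2^m * C(n, m) * C(n - m, floor((2k+1-m)/2)).
    # Terms with m above n vanish because C(n, m) = 0 there,
    # so the sum may safely stop at min(2k+1, n).
    return sum(2**m * comb(n, m) * comb(n - m, (2*k + 1 - m) // 2)
               for m in range(min(2*k + 1, n) + 1))

def rhs(n, k):
    # Right-hand side: C(2n+1, 2k+1).
    return comb(2*n + 1, 2*k + 1)

# Spot-check for small parameters (k &lt;= n keeps all arguments non-negative).
assert all(lhs(n, k) == rhs(n, k) for n in range(10) for k in range(n + 1))
</code></pre>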
[ { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. 
The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} 
\\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the 
formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} 
\\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty 
w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. 
In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. 
</p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. 
Finally, from each one of these $m$ subsets we choose exactly one element; this can be done in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be done in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we have already chosen $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" its value. In fact: if $m$ is even, then we must pick the element from the singleton subset, and the equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>
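\n\n<p><em>Remark:</em> as a quick sanity check, the identity can also be verified numerically for small parameters. The following is a minimal sketch (assuming Python 3.8+, whose <code>math.comb</code> returns $0$ when the lower index exceeds the upper one; the helper names <code>lhs</code> and <code>rhs</code> are ad hoc):</p>\n\n<pre><code>from math import comb\n\ndef lhs(n, m):\n    # sum over k of C(n,k) * 2^k * C(n-k, floor((2m+1-k)/2));\n    # terms with k &gt; n vanish, so we may cap k at min(2m+1, n)\n    return sum(comb(n, k) * 2**k * comb(n - k, (2*m + 1 - k) // 2)\n               for k in range(min(2*m + 1, n) + 1))\n\ndef rhs(n, m):\n    return comb(2*n + 1, 2*m + 1)\n\n# check the identity for small parameters\nassert all(lhs(n, m) == rhs(n, m)\n           for n in range(1, 15) for m in range(n + 1))\n</code></pre>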
\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. 
The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} 
\\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the 
formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} 
\\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty 
w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. 
In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. 
</p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. 
Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. 
I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. 
The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} 
\\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" }, { "answer_id": 2088693, "score": 8, "text": "<p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}$ which is the introductory part of this answer.</p>\n\n<p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g.\n\\begin{align*}\n\\binom{n}{q}=[z^q](1+z)^n\\tag{1}\n\\end{align*}\nand to ease comparison with Markos answer I will use his notation. </p>\n\n<blockquote>\n <p><strong>Preliminary:</strong></p>\n \n <p>The following is valid\n \\begin{align*}\n\\binom{n}{\\left\\lfloor \\frac{q}{2}\\right\\rfloor}=[z^qw^n]\\frac{(1+z)(1+w)^n}{1-wz^2}\\tag{2}\n\\end{align*}</p>\n \n <p>With $q$ even, $q\\rightarrow 2q$ we obtain\n \\begin{align*}\n[z^{2q}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\tag{3}\\\\\n&amp;=[w^n](1+w)^nw^q\\tag{4}\\\\\n&amp;=[w^{n-q}](1+w)^n\\tag{5}\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}\n With $q$ odd, $q\\rightarrow 2q+1$ we obtain\n \\begin{align*}\n[z^{2q+1}w^n]\\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\\sum_{j=0}^\\infty w^j z^{2j}\\\\\n&amp;=[w^n](1+w)^nw^q\\\\\n&amp;=[w^{n-q}](1+w)^n\\\\\n&amp;=\\binom{n}{n-q}=\\binom{n}{q}\n\\end{align*}</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li>\n<li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li>\n<li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li>\n<li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li>\n</ul>\n\n<blockquote>\n <p>We now apply (2) to OPs formula and obtain\n \\begin{align*}\n\\sum_{k=0}^{2m+1}&amp;\\binom{n}{k}2^k\\binom{n-k}{\\left\\lfloor\\frac{2m+1-k}{2}\\right\\rfloor}\\\\\n&amp;=\\sum_{k=0}^n\\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\\tag{6}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{2wz}{1+w}\\right)^k\\tag{7}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w)^n}{1-wz^2}\\left(1+\\frac{2wz}{1+w}\\right)^n\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\frac{(1+w(1+2z))^n}{1-wz^2}\\tag{8}\\\\\n&amp;=[z^{2m+1}](1+z)[w^n]\\sum_{q=0}^n\\binom{n}{q}w^q(1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}[w^{n-q}](1+2z)^q\\sum_{j=0}^\\infty w^jz^{2j}\\\\\n&amp;=[z^{2m+1}](1+z)\\sum_{q=0}^n\\binom{n}{q}(1+2z)^q\\left(z^2\\right)^{n-q}\\\\\n&amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\\\\n&amp;=[z^{2m+1}](1+z)^{2n+1}\\\\\n&amp;=\\binom{2n+1}{2m+1}\n\\end{align*}</p>\n \n <p>and the claim follows.</p>\n</blockquote>\n\n<p><em>Comment:</em></p>\n\n<ul>\n<li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the 
formula (2) selects the proper range.</p></li>\n<li><p>In (7) we collect all factors with exponent $k$.</p></li>\n<li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li>\n</ul>\n" }, { "answer_id": 2089099, "score": 8, "text": "<p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p>\n\n<p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p>\n\n<p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\\ell$ of these $n-m$ two-element subsets (which can be made in $\\binom{n-m}\\ell$ ways) we are adding $2\\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\\ell+m$ elements.</p>\n\n<p>The important thing to note is that at this point the value of $\\ell$ is <strong>determined</strong>, that is, we cannot \"choose\" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\\ell+m+1=2k+1$ implies $\\ell=\\frac{2k-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\\ell+m=2k+1$ implies $\\ell=\\frac{2k+1-m}2=\\bigl\\lfloor\\frac{2k+1-m}2\\bigr\\rfloor$.</p>\n" }, { "answer_id": 2088312, "score": 7, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$\\sum_{k=0}^{2m+1} {n\\choose k} 2^k \n{n-k\\choose \\lfloor (2m+1-k)/2 \\rfloor}\n= {2n+1\\choose 2m+1}.$$</p>\n\n<p>Observe that from first principles we have that</p>\n\n<p>$${n\\choose \\lfloor q/2 \\rfloor} = {n\\choose n-\\lfloor q/2 \\rfloor} =\n\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(1+z\n+wz^2+wz^3\n+w^2z^4+w^2z^5+\\cdots\\right)\n\\; dw \\; dz.$$</p>\n\n<p>This simplifies to</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\left(\\frac{1}{1-wz^2}+z\\frac{1}{1-wz^2}\\right)\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{q+1}}\n\\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>This correctly enforces the range as the reader is invited to verify\nand we may extend $k$ beyond $2m+1,$ getting for the sum</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\sum_{k\\ge 0} {n\\choose k} 2^k z^k \\frac{w^k}{(1+w)^k}\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\\\ \\times \\frac{1}{2\\pi i} \\int_{|w|=\\gamma} \\frac{(1+w)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\left(1+\\frac{2wz}{1+w}\\right)^n\n\\; dw \\; dz\n\\\\ = \\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \\frac{1+z}{z^{2m+2}}\n\\frac{1}{2\\pi i} 
\\int_{|w|=\\gamma} \\frac{(1+w+2wz)^n}{w^{n+1}}\n\\frac{1}{1-wz^2}\n\\; dw \\; dz.$$</p>\n\n<p>Extracting the inner coefficient now yields</p>\n\n<p>$$\\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{2n-2q}\n= z^{2n} \\sum_{q=0}^n {n\\choose q} (1+2z)^q z^{-2q}\n\\\\= z^{2n} \\left(1+\\frac{1+2z}{z^2}\\right)^n\n= (1+z)^{2n}.$$</p>\n\n<p>We thus get from the outer coefficient</p>\n\n<p>$$\\frac{1}{2\\pi i} \\int_{|z|=\\epsilon} \n\\frac{(1+z)^{2n+1}}{z^{2m+2}} \\; dz$$</p>\n\n<p>which is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n{2n+1\\choose 2m+1}}$$</p>\n\n<p>as claimed. I do believe this is an instructive exercise.</p>\n" } ]
Question: A combinatorial identity with binomial coefficients and floor function.

Answer 1: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$, which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Marko's answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following representation is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$, we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$, we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In addition to (1) we use a second variable $z$, which serves as a marker selecting, via the factor $1+z$, even and odd exponents of $w^j$.</p></li> <li><p>In (3) we use the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"noreferrer\">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$, which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to the OP's formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply formula (2) and change the upper limit of the sum to $n$ without changing anything, since formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe that the factor $(1+w)^n$ cancels nicely.</p></li> </ul>

Answer 2: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be done in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be done in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be done in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we have already chosen $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>; that is, we cannot \"choose\" this value. In fact: if $m$ is even, then we necessarily must pick the element from the singleton subset, and the equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd, then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p>

Answer 3: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range, as the reader is invited to verify, and we may extend $k$ beyond $2m+1$, getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p>
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 9: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 10: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 11: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 12: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 13: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 14: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 15: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 16: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 17: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 18: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 19: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 20: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 21: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 22: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 23: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 24: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 25: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 26: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 27: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p> Answer 28: <p><strong>Note:</strong> This answer is the result of an analysis of the highly instructive and elegant answer by Marko Riedel. One highlight is the useful representation of $\binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}$ which is the introductory part of this answer.</p> <p>We use the <em>coefficient of</em> operator $[z^q]$ to denote the coefficient of $z^q$ in a series. This way we can write e.g. \begin{align*} \binom{n}{q}=[z^q](1+z)^n\tag{1} \end{align*} and to ease comparison with Markos answer I will use his notation. </p> <blockquote> <p><strong>Preliminary:</strong></p> <p>The following is valid \begin{align*} \binom{n}{\left\lfloor \frac{q}{2}\right\rfloor}=[z^qw^n]\frac{(1+z)(1+w)^n}{1-wz^2}\tag{2} \end{align*}</p> <p>With $q$ even, $q\rightarrow 2q$ we obtain \begin{align*} [z^{2q}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q}](1+z)\sum_{j=0}^\infty w^j z^{2j}\tag{3}\\ &amp;=[w^n](1+w)^nw^q\tag{4}\\ &amp;=[w^{n-q}](1+w)^n\tag{5}\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*} With $q$ odd, $q\rightarrow 2q+1$ we obtain \begin{align*} [z^{2q+1}w^n]\frac{(1+z)(1+w)^n}{1-wz^2}&amp;=[w^n](1+w)^n[z^{2q+1}](1+z)\sum_{j=0}^\infty w^j z^{2j}\\ &amp;=[w^n](1+w)^nw^q\\ &amp;=[w^{n-q}](1+w)^n\\ &amp;=\binom{n}{n-q}=\binom{n}{q} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Additionally to (1) we use a second variable $z$ which is used as marker to select via $1+z$ even and odd exponent of $w^j$.</p></li> <li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="noreferrer">geometric series expansion</a></em> and the linearity of the <em>coefficient of</em> operator.</p></li> <li><p>In (4) we select the coefficient of $z^{2q}$ which is $w^q$.</p></li> <li><p>In (5) we apply the rule $[z^p]z^qA(z)=[z^{p-q}]A(z)$.</p></li> </ul> <blockquote> <p>We now apply (2) to OPs formula and obtain \begin{align*} \sum_{k=0}^{2m+1}&amp;\binom{n}{k}2^k\binom{n-k}{\left\lfloor\frac{2m+1-k}{2}\right\rfloor}\\ &amp;=\sum_{k=0}^n\binom{n}{k}2^k[z^{2m+1-k}w^{n-k}]\frac{(1+z)(1+w)^{n-k}}{1-wz^2}\tag{6}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\sum_{k=0}^n\binom{n}{k}\left(\frac{2wz}{1+w}\right)^k\tag{7}\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w)^n}{1-wz^2}\left(1+\frac{2wz}{1+w}\right)^n\\ &amp;=[z^{2m+1}](1+z)[w^n]\frac{(1+w(1+2z))^n}{1-wz^2}\tag{8}\\ &amp;=[z^{2m+1}](1+z)[w^n]\sum_{q=0}^n\binom{n}{q}w^q(1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}[w^{n-q}](1+2z)^q\sum_{j=0}^\infty w^jz^{2j}\\ &amp;=[z^{2m+1}](1+z)\sum_{q=0}^n\binom{n}{q}(1+2z)^q\left(z^2\right)^{n-q}\\ &amp;=[z^{2m+1}](1+z)(1+2z+z^2)^n\\ &amp;=[z^{2m+1}](1+z)^{2n+1}\\ &amp;=\binom{2n+1}{2m+1} \end{align*}</p> <p>and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (6) we apply the formula (2) and we change the upper limit of the sum to $n$ without changing anything, since the formula (2) selects the proper range.</p></li> <li><p>In (7) we collect all factors with exponent $k$.</p></li> <li><p>In (8) observe the factor $(1+w)^n$ cancels nicely.</p></li> </ul> Answer 29: <p>Here is a combinatorial proof. The right hand side counts the number of ways to choose $2k+1$ elements from a set with $2n+1$ elements. 
Fix a partition of the latter set into $n+1$ disjoint subsets, namely a singleton and $n$ two-element subsets.</p> <p>In order to choose such $2k+1$ elements, first we choose the number $m$ of two-element subsets from which we will select exactly one element. Now we choose such subsets, which can be made in $\binom nm$ ways. Finally, from each one of these $m$ subsets we choose exactly one element; this can be made in $2^m$ ways.</p> <p>It remains to choose the remaining $2k+1-m$ elements among the elements of the singleton and the remaining $n-m$ two-element members of the partition. Because of the previous step, picking an element of one of these two-element subsets forces us to pick the other element as well. Thus, by choosing $\ell$ of these $n-m$ two-element subsets (which can be made in $\binom{n-m}\ell$ ways) we are adding $2\ell$ elements to our $m$ already chosen elements. In other words, at this point we already chose $2\ell+m$ elements.</p> <p>The important thing to note is that at this point the value of $\ell$ is <strong>determined</strong>, that is, we cannot "choose" such value. In fact: if $m$ is even, then we necessarily must pick up the element from the singleton subset, and equality $2\ell+m+1=2k+1$ implies $\ell=\frac{2k-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$; if $m$ is odd then we cannot choose the element from the singleton, and in this case $2\ell+m=2k+1$ implies $\ell=\frac{2k+1-m}2=\bigl\lfloor\frac{2k+1-m}2\bigr\rfloor$.</p> Answer 30: <p>Suppose we seek to prove that</p> <p>$$\sum_{k=0}^{2m+1} {n\choose k} 2^k {n-k\choose \lfloor (2m+1-k)/2 \rfloor} = {2n+1\choose 2m+1}.$$</p> <p>Observe that from first principles we have that</p> <p>$${n\choose \lfloor q/2 \rfloor} = {n\choose n-\lfloor q/2 \rfloor} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(1+z +wz^2+wz^3 +w^2z^4+w^2z^5+\cdots\right) \; dw \; dz.$$</p> <p>This simplifies to</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \left(\frac{1}{1-wz^2}+z\frac{1}{1-wz^2}\right) \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{q+1}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>This correctly enforces the range as the reader is invited to verify and we may extend $k$ beyond $2m+1,$ getting for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \sum_{k\ge 0} {n\choose k} 2^k z^k \frac{w^k}{(1+w)^k} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \\ \times \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w)^n}{w^{n+1}} \frac{1}{1-wz^2} \left(1+\frac{2wz}{1+w}\right)^n \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{2m+2}} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{(1+w+2wz)^n}{w^{n+1}} \frac{1}{1-wz^2} \; dw \; dz.$$</p> <p>Extracting the inner coefficient now yields</p> <p>$$\sum_{q=0}^n {n\choose q} (1+2z)^q z^{2n-2q} = z^{2n} \sum_{q=0}^n {n\choose q} (1+2z)^q z^{-2q} \\= z^{2n} \left(1+\frac{1+2z}{z^2}\right)^n = (1+z)^{2n}.$$</p> <p>We thus get from the outer coefficient</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n+1}}{z^{2m+2}} \; dz$$</p> <p>which is</p> <p>$$\bbox[5px,border:2px solid #00A000]{ {2n+1\choose 2m+1}}$$</p> <p>as claimed. 
I do believe this is an instructive exercise.</p>
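A quick numerical sanity check of the identity $\sum_{k=0}^{2m+1}\binom{n}{k}2^k\binom{n-k}{\lfloor(2m+1-k)/2\rfloor}=\binom{2n+1}{2m+1}$ proved in the answers above, as a minimal Python sketch (the helper name `lhs` is ours and not taken from any of the answers; the proofs cover all $n,m$, this only spot-checks small cases):

```python
from math import comb

def lhs(n, m):
    # sum over k of C(n,k) * 2^k * C(n-k, floor((2m+1-k)/2)).
    # k is capped at n because C(n,k) = 0 for k > n, so those
    # terms contribute nothing anyway.
    return sum(comb(n, k) * 2**k * comb(n - k, (2*m + 1 - k) // 2)
               for k in range(min(2*m + 1, n) + 1))

# check against C(2n+1, 2m+1) for all 1 <= n <= 12 and 0 <= m <= n
assert all(lhs(n, m) == comb(2*n + 1, 2*m + 1)
           for n in range(1, 13) for m in range(n + 1))
print("identity holds on the tested range")
```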
Proving $x^4+y^4=z^2$ has no integer solutions
2,087,679
9
https://math.stackexchange.com/questions/2087679/proving-x4y4-z2-has-no-integer-solutions
<p>I need a solution check to see if I overlooked something:</p> <p>If $x^4+y^4=z^2$ has a solution in positive integers, then $(\frac{x}{y})^4+1=(\frac{z}{y^2})^2$ has a solution in the rationals. </p> <p>The second equation is equivalent to $X^4+1=Y^2$, which can be written in cubic Weierstrass form $$v^2=u^3-4u$$ where $u=\frac{2(Y+1)}{X^2}$ and $v=\frac{4(Y+1)}{X^3}$. Using SAGE we can see that the Mordell-Weil group of this elliptic curve has rank $0$ (SAGE returns an empty set of generators, but I'm not sure if that is enough to conclude this), so we can conclude that there are no rational solutions with $X\neq 0$ to the second equation and no solutions in positive integers to the first one. </p> <p><strong>edit:</strong> No solutions in <strong>positive integers</strong>. </p>
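For reference, here is a minimal SageMath sketch of the computation described in the question (a reconstruction rather than the asker's actual session; it assumes a Sage environment, where `EllipticCurve` is built in):

```python
# Sage session for the curve v^2 = u^3 - 4u in short Weierstrass form
E = EllipticCurve([-4, 0])
print(E.gens())             # []  -- no generators of infinite order
print(E.rank())             # 0
print(E.torsion_points())   # the 2-torsion points and the point at infinity
print(E.integral_points())  # [(-2 : 0 : 1), (0 : 0 : 1), (2 : 0 : 1)]
```

The empty set SAGE returns is the list of generators of infinite order, not the Mordell-Weil group itself; that group is $\mathbb{Z}/2\times\mathbb{Z}/2$, generated by the $2$-torsion points. Since the rank is $0$, every rational point is a torsion point, each of which has $v=0$ or is the point at infinity, and substituting back through $u=\frac{2(Y+1)}{X^2}$, $v=\frac{4(Y+1)}{X^3}$ forces $X=0$. So the computation does support the conclusion, once the torsion points are checked.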
[ { "answer_id": 2088570, "score": 13, "text": "<p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p>\n\n<p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p>\n\n<p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\\{x^2,y^2\\}=\\{m^2-n^2, 2mn\\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p>\n\n<p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\\{a^2-b^2, 2ab\\}=\\{x,n\\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p>\n\n<p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\\gcd (a,b)=1$ so the members of $\\{a,b, a^2+b^2\\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. Therefore each of $a,b,a^2+b^2$ is a square: there are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p>\n" }, { "answer_id": 2087713, "score": 4, "text": "<p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p>\n\n<p>Say $a=x^2$ and $b=y^2$.</p>\n\n<p>Then the equation becomes $a^2+b^2=z^2$.</p>\n\n<p>Now we can conclude that $a,b,z$ will be of the following forms, courtesy of <a href=\"http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html\" rel=\"nofollow noreferrer\">this proof</a>:\n$$a=d\\cdot 2pq$$\n$$b=d\\cdot (p^2-q^2)$$\n$$z=d\\cdot (p^2+q^2)$$</p>\n\n<p>where $d=\\mathrm{gcd}(a,b,z)$ and $p,q \\in \\mathbb{N}$.</p>\n\n<p>Now $a$ and $b$ are $2$ perfect squares. </p>\n\n<p>So the problem now reduces to this:</p>\n\n<blockquote>\n <p>Prove that numbers having the form $d\\cdot 2pq$ and $d\\cdot (p^2-q^2)$ cannot be perfect squares.</p>\n</blockquote>\n\n<p>See if this helps.</p>\n\n<p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href=\"https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n\">this link</a>.</p>\n" }, { "answer_id": 2087789, "score": 2, "text": "<p>I think your equation has infinitely many integer solutions.</p>\n\n<p>For example $\\{(a,0,a^2)|a\\in\\mathbb Z\\}$.</p>\n" }, { "answer_id": 2087738, "score": 1, "text": "<p>SAGE 6.7 implements an algorithm due to Pethő et al. (See A. Pethő, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is \"Yes\". 
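The integral-point computation that the last answer trusts SAGE with can be reproduced in a few lines (again a sketch assuming the standard Sage EllipticCurve interface; the output noted in the comment is an assumption consistent with the discussion above):

```python
# Sketch: the kind of integral-point computation the last answer refers to,
# on the same curve v^2 = u^3 - 4u (standard Sage API).
E = EllipticCurve([-4, 0])
pts = E.integral_points()      # invokes Sage's integral-points machinery
print(pts)                     # expected: just the three affine 2-torsion
                               # points (-2, 0), (0, 0), (2, 0)
```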
Question: Proving $x^4+y^4=z^2$ has no integer solutions Answer 1: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 2: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will be of the following forms, courtesy of <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers of the forms $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot simultaneously be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 3: <p>I think your equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 4: <p>SAGE 6.7 implements an algorithm due to Pethő et al. (see A. Pethő, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No. 3, 383-402), which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p>
So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 6: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 7: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 8: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 9: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. 
So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 10: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 11: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 12: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 13: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 14: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 15: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 16: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 17: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 18: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 19: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 20: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 21: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 22: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 23: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 24: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 25: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 26: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 27: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 28: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 29: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 30: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 31: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 32: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 33: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 34: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 35: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 36: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 37: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 38: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 39: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 40: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 41: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 42: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 43: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 44: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 45: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 46: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 47: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 48: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 49: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 50: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 51: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 52: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 53: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 54: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 55: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 56: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 57: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 58: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 59: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 60: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 61: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 62: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 63: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 64: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 65: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 66: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 67: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 68: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p> Answer 69: <p>There are no positive integer solutions to $x^4+y^4=z^2.$ By contradiction suppose $(x,y,z)$ is a positive solution with the least possible $z.$</p> <p>If a prime $p$ divides $x$ and $y$ then $p^4$ divides $z^2,$ so $p^2$ divides $z.$ But then $(x',y',z')=(x/p,y/p,z/p^2)$ is a positive solution with $z'&lt;z,$ contradicting the minimality of $z.$ </p> <p>Therefore $x,y$ are co-prime. So $(x^2,y^2,z)$ is a Fundamental Pythagorean triplet. So there exist co-prime positive integers $m,n$, not both odd, with $\{x^2,y^2\}=\{m^2-n^2, 2mn\}$ and $z=m^2+n^2.$ WLOG $x^2=m^2-n^2$ and $y^2=2mn.$ Note that $x$ is odd and $y$ is even.</p> <p>Now $x^2+n^2=m^2.$ A prime divisor of both $x$ and $n$ must divide $m,$ but $\gcd(m,n)=1.$ So $x,n$ are co-prime and $(x,n,m)$ is a Fundamental Pythagorean triplet. So there are co-prime positive integers $a,b$ with $\{a^2-b^2, 2ab\}=\{x,n\}$ and $a^2+b^2=m.$ Since $x$ is odd we have $x=a^2-b^2$ and $n=2ab.$</p> <p>Now $y^2=2mn=2(a^2+b^2)(2ab)=4(a^2+b^2)ab.$ But $\gcd (a,b)=1$ so the members of $\{a,b, a^2+b^2\}$ are pair-wise co-prime and their product $(y/2)^2$ is a square. 
Therefore each of $a,b,a^2+b^2$ is a square: There are positive integers $d,e,f$ with $a=d^2,b=e^2,a^2+b^2=f^2.$ Therefore $$ f^2=a^2+b^2=d^4+e^4.$$ But $f^2=a^2+b^2=m&lt;m^2+n^2=z,$ so $f\leq f^2&lt;z.$ This contradicts the minimality of $z.$</p> Answer 70: <p>The following is <em>not an answer</em>, almost like <strong>a lengthy version of a comment</strong>.</p> <p>Say $a=x^2$ and $b=y^2$.</p> <p>Then the equation becomes $a^2+b^2=z^2$.</p> <p>Now we can conclude that $a,b,z$ will of the following forms courtesy <a href="http://fermatslasttheorem.blogspot.in/2005/05/pythagorean-triples-solution.html" rel="nofollow noreferrer">this proof</a>: $$a=d\cdot 2pq$$ $$b=d\cdot (p^2-q^2)$$ $$z=d\cdot (p^2+q^2)$$</p> <p>where $d=\mathrm{gcd}(a,b,z)$ and $p,q \in \mathbb{N}$.</p> <p>Now $a$ and $b$ are $2$ perfect squares. </p> <p>So the problem now reduces to this:</p> <blockquote> <p>Prove that numbers having the form $d\cdot 2pq$ and $d\cdot (p^2-q^2)$ cannot be perfect squares.</p> </blockquote> <p>See if this helps.</p> <p><strong>P.S.</strong> If you still want some alternate method to find solutions of $a^2+b^2=z^2$, you can look up <a href="https://math.stackexchange.com/questions/93316/integer-solutions-to-x2y2-n">this link</a>.</p> Answer 71: <p>I think you equation has infinitely many integer solutions.</p> <p>For example $\{(a,0,a^2)|a\in\mathbb Z\}$.</p> Answer 72: <p>SAGE 6.7 implements an algorithm due to Peth, et al. (See A. Peth, H. G. Zimmer, J. Gebel, E. Herrmann, Computing all S-integral points on elliptic curves, Math. Proc. Camb. Philos. Soc. 127 (1999), No.3, 383-402) which is considered robust. I've had papers accepted which invoked SAGE, and it was trusted with finding all the integer points on the elliptic curves. So the answer to your question is "Yes". But of course, some reader may have doubts.</p>
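Both descent steps in Answer 1 rely on the parametrization of fundamental Pythagorean triples quoted there. A minimal generator sketch (added for illustration; the cutoff on m is an arbitrary choice) produces triples (m^2 - n^2, 2mn, m^2 + n^2) from coprime m > n not both odd:

import math

def fundamental_triples(m_limit: int):
    # primitive triples (m*m - n*n, 2*m*n, m*m + n*n) with m > n >= 1,
    # gcd(m, n) = 1 and m, n of opposite parity -- the form used in Answer 1
    for m in range(2, m_limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and math.gcd(m, n) == 1:
                yield (m*m - n*n, 2*m*n, m*m + n*n)

for a, b, c in fundamental_triples(6):
    assert a*a + b*b == c*c and math.gcd(a, b) == 1
    print((a, b, c))  # (3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), ...

Opposite parity together with coprimality is equivalent to the "co-prime, not both odd" condition in the proof, since coprime m, n can never both be even.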
How to prove that gcd(m+1, n+1) divides (mn-1)
2,089,393
9
https://math.stackexchange.com/questions/2089393/how-to-prove-that-gcdm1-n1-divides-mn-1
<p>I'm learning about divisors and the gcd, but now I'm stuck on proving:</p> <blockquote> <p>gcd(m+1, n+1) divides (mn-1) for all m,n in the set of integers</p> </blockquote> <p>Help is appreciated on how to prove this! Thanks</p>
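A minimal numerical spot-check of the claim before the posted answers (added for illustration; the grid bound of 50 is an arbitrary choice). It also verifies the identity (m+1)*n - (n+1) = mn - 1, the linear combination the top-scoring answer below uses with r = n, s = -1:

from math import gcd

for m in range(-50, 51):
    for n in range(-50, 51):
        assert (m + 1) * n - (n + 1) == m * n - 1  # the key identity
        d = gcd(m + 1, n + 1)
        if d:  # d == 0 only at m = n = -1, where mn - 1 = 0 (and 0 divides 0 by convention)
            assert (m * n - 1) % d == 0
print("gcd(m+1, n+1) divides mn - 1 on the whole grid")

Python's math.gcd returns a non-negative value for negative arguments, which is why the check covers the stated integer range and not just the positives.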
[ { "answer_id": 2089423, "score": 12, "text": "<p>A common strategy to solve this type of problem is using the following simple fact: </p>\n\n<p>If $d$ divides $a$ and $b$, then d divides\n$$a\\cdot r+b\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.</p>\n\n<p>Thus if $d=\\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$,\nand so d divides\n$$(m+1)\\cdot r+(n+1)\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.\nIn particular, for $r=n$ and $s=-1$, we have that $d$ divides\n$$(m+1)\\cdot n+(n+1)\\cdot (-1)=mn-1.$$</p>\n" }, { "answer_id": 2089404, "score": 8, "text": "<p><strong>Hint</strong> $\\ $ mod $\\,\\gcd(m\\!+\\!1,n\\!+\\!1)\\!:\\ \\ \\begin{align} &amp;m\\!+\\!1\\equiv0\\equiv n\\!+\\!1\\\\ \\Rightarrow\\ &amp;\\ \\ \\,m\\equiv -1\\equiv n\\\\ \\Rightarrow\\ &amp;mn\\equiv (-1)(-1)\\equiv 1\\end{align}$</p>\n\n<p><strong>Remark</strong> $\\ $ More generally, as above, using the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a> we deduce</p>\n\n<p>$$ P(m,n)\\equiv P(a,b)\\,\\ \\pmod{\\gcd(m\\!-\\!a,m\\!-\\!b)}$$</p>\n\n<p>for any polynomial $\\,P(x,y)\\,$ with integer coefficients, and for any integers $\\,m,n,a,b.$</p>\n\n<p>OP is special case $\\, a,b = -1\\ $ and $\\ P(m,n) = mn\\ $ (or $\\ mn-1)$</p>\n" }, { "answer_id": 2089549, "score": 5, "text": "<p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that\n$$\n\\begin{align}\n\\gcd(m+1, n+1)=d &amp; \\implies m+1=pd,n+1=qd \\\\\n &amp; \\implies m=pd-1,n=qd-1\n\\end{align}\n$$\nfor some $p,q\\in\\mathbb{Z}$. Then\n$$\n\\begin{align}\nmn-1 &amp; = (pd-1)(qd-1)-1 \\\\\n &amp; = (pqd^2-(p+q)d+1)-1 \\\\\n &amp; = (pqd-(p+q))d,\n\\end{align}\n$$\nwhich is an integer multiple of $d$.</p>\n" }, { "answer_id": 2089403, "score": 4, "text": "<p>Use the fact that </p>\n\n<p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p>\n\n<p>THen it is straightforward</p>\n" }, { "answer_id": 2089423, "score": 12, "text": "<p>A common strategy to solve this type of problem is using the following simple fact: </p>\n\n<p>If $d$ divides $a$ and $b$, then d divides\n$$a\\cdot r+b\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.</p>\n\n<p>Thus if $d=\\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$,\nand so d divides\n$$(m+1)\\cdot r+(n+1)\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.\nIn particular, for $r=n$ and $s=-1$, we have that $d$ divides\n$$(m+1)\\cdot n+(n+1)\\cdot (-1)=mn-1.$$</p>\n" }, { "answer_id": 2089404, "score": 8, "text": "<p><strong>Hint</strong> $\\ $ mod $\\,\\gcd(m\\!+\\!1,n\\!+\\!1)\\!:\\ \\ \\begin{align} &amp;m\\!+\\!1\\equiv0\\equiv n\\!+\\!1\\\\ \\Rightarrow\\ &amp;\\ \\ \\,m\\equiv -1\\equiv n\\\\ \\Rightarrow\\ &amp;mn\\equiv (-1)(-1)\\equiv 1\\end{align}$</p>\n\n<p><strong>Remark</strong> $\\ $ More generally, as above, using the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a> we deduce</p>\n\n<p>$$ P(m,n)\\equiv P(a,b)\\,\\ \\pmod{\\gcd(m\\!-\\!a,m\\!-\\!b)}$$</p>\n\n<p>for any polynomial $\\,P(x,y)\\,$ with integer coefficients, and for any integers $\\,m,n,a,b.$</p>\n\n<p>OP is special case $\\, a,b = -1\\ $ and $\\ P(m,n) = mn\\ $ (or $\\ mn-1)$</p>\n" }, { "answer_id": 2089549, "score": 5, "text": "<p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. 
First observe that\n$$\n\\begin{align}\n\\gcd(m+1, n+1)=d &amp; \\implies m+1=pd,n+1=qd \\\\\n &amp; \\implies m=pd-1,n=qd-1\n\\end{align}\n$$\nfor some $p,q\\in\\mathbb{Z}$. Then\n$$\n\\begin{align}\nmn-1 &amp; = (pd-1)(qd-1)-1 \\\\\n &amp; = (pqd^2-(p+q)d+1)-1 \\\\\n &amp; = (pqd-(p+q))d,\n\\end{align}\n$$\nwhich is an integer multiple of $d$.</p>\n" }, { "answer_id": 2089403, "score": 4, "text": "<p>Use the fact that </p>\n\n<p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p>\n\n<p>THen it is straightforward</p>\n" }, { "answer_id": 2089423, "score": 12, "text": "<p>A common strategy to solve this type of problem is using the following simple fact: </p>\n\n<p>If $d$ divides $a$ and $b$, then d divides\n$$a\\cdot r+b\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.</p>\n\n<p>Thus if $d=\\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$,\nand so d divides\n$$(m+1)\\cdot r+(n+1)\\cdot s$$\nfor all $r,s \\in \\mathbb{Z}$.\nIn particular, for $r=n$ and $s=-1$, we have that $d$ divides\n$$(m+1)\\cdot n+(n+1)\\cdot (-1)=mn-1.$$</p>\n" }, { "answer_id": 2089404, "score": 8, "text": "<p><strong>Hint</strong> $\\ $ mod $\\,\\gcd(m\\!+\\!1,n\\!+\\!1)\\!:\\ \\ \\begin{align} &amp;m\\!+\\!1\\equiv0\\equiv n\\!+\\!1\\\\ \\Rightarrow\\ &amp;\\ \\ \\,m\\equiv -1\\equiv n\\\\ \\Rightarrow\\ &amp;mn\\equiv (-1)(-1)\\equiv 1\\end{align}$</p>\n\n<p><strong>Remark</strong> $\\ $ More generally, as above, using the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a> we deduce</p>\n\n<p>$$ P(m,n)\\equiv P(a,b)\\,\\ \\pmod{\\gcd(m\\!-\\!a,m\\!-\\!b)}$$</p>\n\n<p>for any polynomial $\\,P(x,y)\\,$ with integer coefficients, and for any integers $\\,m,n,a,b.$</p>\n\n<p>OP is special case $\\, a,b = -1\\ $ and $\\ P(m,n) = mn\\ $ (or $\\ mn-1)$</p>\n" }, { "answer_id": 2089549, "score": 5, "text": "<p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that\n$$\n\\begin{align}\n\\gcd(m+1, n+1)=d &amp; \\implies m+1=pd,n+1=qd \\\\\n &amp; \\implies m=pd-1,n=qd-1\n\\end{align}\n$$\nfor some $p,q\\in\\mathbb{Z}$. 
Question: How to prove that gcd(m+1, n+1) divides (mn-1) Answer 1: <p>A common strategy to solve this type of problem is using the following simple fact: </p>
<p>If $d$ divides $a$ and $b$, then $d$ divides
$$a\cdot r+b\cdot s$$
for all $r,s \in \mathbb{Z}$.</p>
<p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$,
and so $d$ divides
$$(m+1)\cdot r+(n+1)\cdot s$$
for all $r,s \in \mathbb{Z}$.
In particular, for $r=n$ and $s=-1$, we have that $d$ divides
$$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 2: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p>
<p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p>
<p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,n\!-\!b)}$$</p>
<p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p>
<p>OP is the special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\,mn-1$).</p> Answer 3: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that
$$
\begin{align}
\gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\
 &amp; \implies m=pd-1,n=qd-1
\end{align}
$$
for some $p,q\in\mathbb{Z}$. Then
$$
\begin{align}
mn-1 &amp; = (pd-1)(qd-1)-1 \\
 &amp; = (pqd^2-(p+q)d+1)-1 \\
 &amp; = (pqd-(p+q))d,
\end{align}
$$
which is an integer multiple of $d$.</p> Answer 4: <p>Use the fact that</p>
<p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p>
<p>Then it is straightforward.</p>
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 8: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 9: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 10: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 11: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 12: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 13: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 14: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 15: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 16: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 17: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 18: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 19: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 20: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 21: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 22: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 23: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 24: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 25: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 26: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 27: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 28: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 29: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 30: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 31: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 32: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 33: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 34: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 35: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 36: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 37: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 38: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 39: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 40: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 41: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 42: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 43: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 44: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 45: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 46: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 47: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 48: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 49: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 50: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 51: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p> Answer 52: <p>Use the fact that </p> <p>$$(m+1)(n+1) = mn + m + n + 1 = (mn - 1) + (m+1) + (n+1) $$</p> <p>THen it is straightforward</p> Answer 53: <p>A common strategy to solve this type of problem is using the following simple fact: </p> <p>If $d$ divides $a$ and $b$, then d divides $$a\cdot r+b\cdot s$$ for all $r,s \in \mathbb{Z}$.</p> <p>Thus if $d=\gcd(m+1,n+1)$, then obviously $d$ divides $m+1$ and $n+1$, and so d divides $$(m+1)\cdot r+(n+1)\cdot s$$ for all $r,s \in \mathbb{Z}$. In particular, for $r=n$ and $s=-1$, we have that $d$ divides $$(m+1)\cdot n+(n+1)\cdot (-1)=mn-1.$$</p> Answer 54: <p><strong>Hint</strong> $\ $ mod $\,\gcd(m\!+\!1,n\!+\!1)\!:\ \ \begin{align} &amp;m\!+\!1\equiv0\equiv n\!+\!1\\ \Rightarrow\ &amp;\ \ \,m\equiv -1\equiv n\\ \Rightarrow\ &amp;mn\equiv (-1)(-1)\equiv 1\end{align}$</p> <p><strong>Remark</strong> $\ $ More generally, as above, using the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a> we deduce</p> <p>$$ P(m,n)\equiv P(a,b)\,\ \pmod{\gcd(m\!-\!a,m\!-\!b)}$$</p> <p>for any polynomial $\,P(x,y)\,$ with integer coefficients, and for any integers $\,m,n,a,b.$</p> <p>OP is special case $\, a,b = -1\ $ and $\ P(m,n) = mn\ $ (or $\ mn-1)$</p> Answer 55: <p>Although it's less elegant than the other approaches, you can also prove this through direct substitution. First observe that $$ \begin{align} \gcd(m+1, n+1)=d &amp; \implies m+1=pd,n+1=qd \\ &amp; \implies m=pd-1,n=qd-1 \end{align} $$ for some $p,q\in\mathbb{Z}$. 
Then $$ \begin{align} mn-1 &amp; = (pd-1)(qd-1)-1 \\ &amp; = (pqd^2-(p+q)d+1)-1 \\ &amp; = (pqd-(p+q))d, \end{align} $$ which is an integer multiple of $d$.</p>
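A quick numerical spot-check of the divisibility claim above, that $\gcd(m+1,n+1)$ divides $mn-1$ — a minimal sketch using only the Python standard library:

```python
from math import gcd
from random import randint

# Spot-check: gcd(m+1, n+1) divides m*n - 1 for random integer pairs.
for _ in range(10_000):
    m = randint(-10**6, 10**6)
    n = randint(-10**6, 10**6)
    d = gcd(m + 1, n + 1)
    if d != 0:  # d == 0 only in the degenerate case m == n == -1
        assert (m * n - 1) % d == 0
print("all checks passed")
```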
Prove $1+\frac13 \left(1+\frac15 \left(1+\frac17 (1+\dots ) \right) \right)=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right)$
2,090,204
9
https://math.stackexchange.com/questions/2090204/prove-1-frac13-left1-frac15-left1-frac17-1-dots-right-right-sqr
<blockquote> <p>How to prove: $$1+\frac13 \left(1+\frac{1}{5}\left(1+\frac{1}{7}\left(1+\frac{1}{9}\left(1+\dots \right) \right) \right) \right)=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right)$$</p> </blockquote> <p>Since we can represent the series for $e$ in a pretty form:</p> <p>$$1+\frac12 \left(1+\frac{1}{3}\left(1+\frac{1}{4}\left(1+\frac{1}{5}\left(1+\dots \right) \right) \right) \right)=e-1$$</p> <p>I've been trying to search for closed forms of some similar expressions. For example, it's easy to see that:</p> <p>$$1+\frac12 \left(1+\frac{1}{4}\left(1+\frac{1}{6}\left(1+\frac{1}{8}\left(1+\dots \right) \right) \right) \right)=\sqrt{e}$$</p> <p>However, I don't know how to prove the expression in the title. Seems to be something for Ramanujan to consider.</p> <p>P.S. the closed form was given by Wolfram Alpha.</p> <hr> <p>See a general formula provided by Lucian in the comments.</p> <p>WA also provides another general formula:</p> <p>$$\sum_{n=0}^\infty \frac{1}{\prod_{k=0}^n (bk+c)}=e^{\frac{1}{b}} \left(\frac{1}{b} \right)^{1-\frac{c}{b}} \left( \Gamma\left(\frac{c}{b} \right) -\Gamma\left(\frac{c}{b},\frac{1}{b} \right) \right)$$</p> <p>Still don't know how to prove it, but I've noticed the emergence of the hypergeometric functions in some cases, which makes it possible to transform the expression to hypergeometric series.</p>
[ { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" }, { "answer_id": 2090670, "score": 10, "text": "<p>Using Euler's Beta function,</p>\n\n<p>$$ S = \\sum_{n\\geq 0}\\frac{1}{(2n+1)!!} = \\sum_{n\\geq 0}\\frac{2^n n!}{(2n+1)!} = \\sum_{n\\geq 0} \\frac{2^n}{n!} B(n+1,n+1) \\tag{1}$$\nleads to:\n$$ S = \\int_{0}^{1}\\sum_{n\\geq 0}\\frac{\\left(2x(1-x)\\right)^n}{n!}\\,dx = \\int_{0}^{1}\\exp\\left(2x(1-x)\\right)\\,dx\\tag{2}$$\nand the last integral is easy to convert into the $\\text{Erf}$ format through the substitution $x=\\frac{1}{2}+t$.</p>\n" }, { "answer_id": 4791737, "score": 0, "text": "<p>(<em>More of a long comment.</em>)</p>\n<p>Since we have the similar forms,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\frac12 \\left(1+\\frac{1}{3}\\left(1+\\frac{1}{4}\\left(1+\\frac{1}{5}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=e-1\\\\\n1+\\frac13 \\left(1+\\frac{1}{5}\\left(1+\\frac{1}{7}\\left(1+\\frac{1}{9}\\Big(1+\\dots \\Big) \\right) \\right) \\right) &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>then we also have the equally pretty continued fractions,</p>\n<p><span class=\"math-container\">\\begin{align}\n1+\\cfrac{2}{2+\\cfrac{3}{3+\\cfrac{4}{4+\\ddots}}} &amp;=e-1\\\\[8pt] \n1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\ddots}}} &amp;=\\sqrt{\\frac{\\pi e}{2}} \\text{erf} \\left( \\frac{1}{\\sqrt{2}} \\right)\n\\end{align}</span></p>\n<p>Pedja asked for the closed-form of the second cfrac in <a href=\"https://math.stackexchange.com/q/4327258/4781\">his 2021 post</a>. Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p>\n<p><span class=\"math-container\">$$\\sqrt{\\frac{\\pi\\,e}{2}}\n=1+\\cfrac{2/2}{2+\\cfrac{3/2}{3+\\cfrac{4/2}{4+\\cfrac{5/2}{5+\\ddots}}}} \\; + \\; \\cfrac1{1+\\cfrac{1}{1+\\cfrac{2}{1+\\cfrac{3}{1+\\ddots}}}}$$</span></p>\n<p>as discussed in <a href=\"https://math.stackexchange.com/q/4791520/4781\">this 2023 post</a>.</p>\n" } ]
Question: Prove $1+\frac13 \left(1+\frac15 \left(1+\frac17 (1+\dots ) \right) \right)=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right)$ Answer 1: <p>Using Euler's Beta function,</p> <p>$$ S = \sum_{n\geq 0}\frac{1}{(2n+1)!!} = \sum_{n\geq 0}\frac{2^n n!}{(2n+1)!} = \sum_{n\geq 0} \frac{2^n}{n!} B(n+1,n+1) \tag{1}$$ leads to: $$ S = \int_{0}^{1}\sum_{n\geq 0}\frac{\left(2x(1-x)\right)^n}{n!}\,dx = \int_{0}^{1}\exp\left(2x(1-x)\right)\,dx\tag{2}$$ and the last integral is easy to convert into the $\text{Erf}$ format through the substitution $x=\frac{1}{2}+t$.</p> Answer 2: <p>(<em>More of a long comment.</em>)</p> <p>Since we have the similar forms,</p> <p><span class="math-container">\begin{align} 1+\frac12 \left(1+\frac{1}{3}\left(1+\frac{1}{4}\left(1+\frac{1}{5}\Big(1+\dots \Big) \right) \right) \right) &amp;=e-1\\ 1+\frac13 \left(1+\frac{1}{5}\left(1+\frac{1}{7}\left(1+\frac{1}{9}\Big(1+\dots \Big) \right) \right) \right) &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>then we also have the equally pretty continued fractions,</p> <p><span class="math-container">\begin{align} 1+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\ddots}}} &amp;=e-1\\[8pt] 1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\ddots}}} &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>Pedja asked for the closed-form of the second cfrac in <a href="https://math.stackexchange.com/q/4327258/4781">his 2021 post</a>. Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p> <p><span class="math-container">$$\sqrt{\frac{\pi\,e}{2}} =1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\cfrac{5/2}{5+\ddots}}}} \; + \; \cfrac1{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}}$$</span></p> <p>as discussed in <a href="https://math.stackexchange.com/q/4791520/4781">this 2023 post</a>.</p>
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p> <p><span class="math-container">$$\sqrt{\frac{\pi\,e}{2}} =1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\cfrac{5/2}{5+\ddots}}}} \; + \; \cfrac1{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}}$$</span></p> <p>as discussed in <a href="https://math.stackexchange.com/q/4791520/4781">this 2023 post</a>.</p> Answer 33: <p>Using Euler's Beta function,</p> <p>$$ S = \sum_{n\geq 0}\frac{1}{(2n+1)!!} = \sum_{n\geq 0}\frac{2^n n!}{(2n+1)!} = \sum_{n\geq 0} \frac{2^n}{n!} B(n+1,n+1) \tag{1}$$ leads to: $$ S = \int_{0}^{1}\sum_{n\geq 0}\frac{\left(2x(1-x)\right)^n}{n!}\,dx = \int_{0}^{1}\exp\left(2x(1-x)\right)\,dx\tag{2}$$ and the last integral is easy to convert into the $\text{Erf}$ format through the substitution $x=\frac{1}{2}+t$.</p> Answer 34: <p>(<em>More of a long comment.</em>)</p> <p>Since we have the similar forms,</p> <p><span class="math-container">\begin{align} 1+\frac12 \left(1+\frac{1}{3}\left(1+\frac{1}{4}\left(1+\frac{1}{5}\Big(1+\dots \Big) \right) \right) \right) &amp;=e-1\\ 1+\frac13 \left(1+\frac{1}{5}\left(1+\frac{1}{7}\left(1+\frac{1}{9}\Big(1+\dots \Big) \right) \right) \right) &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>then we also have the equally pretty continued fractions,</p> <p><span class="math-container">\begin{align} 1+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\ddots}}} &amp;=e-1\\[8pt] 1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\ddots}}} &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>Pedja asked for the closed-form of the second cfrac in <a href="https://math.stackexchange.com/q/4327258/4781">his 2021 post</a>. Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p> <p><span class="math-container">$$\sqrt{\frac{\pi\,e}{2}} =1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\cfrac{5/2}{5+\ddots}}}} \; + \; \cfrac1{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}}$$</span></p> <p>as discussed in <a href="https://math.stackexchange.com/q/4791520/4781">this 2023 post</a>.</p> Answer 35: <p>Using Euler's Beta function,</p> <p>$$ S = \sum_{n\geq 0}\frac{1}{(2n+1)!!} = \sum_{n\geq 0}\frac{2^n n!}{(2n+1)!} = \sum_{n\geq 0} \frac{2^n}{n!} B(n+1,n+1) \tag{1}$$ leads to: $$ S = \int_{0}^{1}\sum_{n\geq 0}\frac{\left(2x(1-x)\right)^n}{n!}\,dx = \int_{0}^{1}\exp\left(2x(1-x)\right)\,dx\tag{2}$$ and the last integral is easy to convert into the $\text{Erf}$ format through the substitution $x=\frac{1}{2}+t$.</p> Answer 36: <p>(<em>More of a long comment.</em>)</p> <p>Since we have the similar forms,</p> <p><span class="math-container">\begin{align} 1+\frac12 \left(1+\frac{1}{3}\left(1+\frac{1}{4}\left(1+\frac{1}{5}\Big(1+\dots \Big) \right) \right) \right) &amp;=e-1\\ 1+\frac13 \left(1+\frac{1}{5}\left(1+\frac{1}{7}\left(1+\frac{1}{9}\Big(1+\dots \Big) \right) \right) \right) &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>then we also have the equally pretty continued fractions,</p> <p><span class="math-container">\begin{align} 1+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\ddots}}} &amp;=e-1\\[8pt] 1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\ddots}}} &amp;=\sqrt{\frac{\pi e}{2}} \text{erf} \left( \frac{1}{\sqrt{2}} \right) \end{align}</span></p> <p>Pedja asked for the closed-form of the second cfrac in <a href="https://math.stackexchange.com/q/4327258/4781">his 2021 post</a>. 
Using this insight, we can express Ramanujan's nice identity as a sum of two cfracs,</p> <p><span class="math-container">$$\sqrt{\frac{\pi\,e}{2}} =1+\cfrac{2/2}{2+\cfrac{3/2}{3+\cfrac{4/2}{4+\cfrac{5/2}{5+\ddots}}}} \; + \; \cfrac1{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}}$$</span></p> <p>as discussed in <a href="https://math.stackexchange.com/q/4791520/4781">this 2023 post</a>.</p>
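As a quick numerical sanity check of the identity above, $\sum_{n\ge 0} \frac{1}{(2n+1)!!} = \sqrt{\frac{\pi e}{2}}\,\operatorname{erf}\left(\frac{1}{\sqrt{2}}\right)$, here is a minimal Python sketch (not part of the original posts; it assumes only the standard-library `math` module):

```python
import math

# Accumulate the partial sums of 1/(2n+1)!!; the double factorial
# grows fast, so ~40 terms already exhaust double precision.
total, term = 0.0, 1.0
for n in range(40):
    total += term        # term equals 1/(2n+1)!! at step n
    term /= 2 * n + 3    # reciprocal of the next double factorial

closed_form = math.sqrt(math.pi * math.e / 2) * math.erf(1 / math.sqrt(2))
print(total, closed_form)  # both print roughly 1.41069
```

The two values agree to machine precision, consistent with the Beta-function and continued-fraction derivations quoted above.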
Linear equations given by $Ax = b$ have a solution if and only if $\operatorname{rank}(A|b) = \operatorname{rank}(A)$
2,090,742
9
https://math.stackexchange.com/questions/2090742/linear-equations-given-by-ax-b-have-a-solution-if-and-only-if-operatorname
<blockquote> <p><strong>Theorem:</strong></p> <p>Given a system of linear equations <span class="math-container">$Ax = b$</span> where <span class="math-container">$A \in M_{m \times n}\left(\mathbb{R}\right)$</span>, <span class="math-container">$ x \in \mathbb{R}^{n}_\text{col} $</span>, <span class="math-container">$ b \in \mathbb{R}^{m}_\text{col}$</span>,</p> <p>deduce that a solution <span class="math-container">$x$</span> exists if and only if <span class="math-container">$\operatorname{rank}\left(A|b\right) = \operatorname{rank}\left(A\right)$</span>, where <span class="math-container">$A|b$</span> is the augmented coefficient matrix of this system.</p> </blockquote> <p>I am having trouble proving the above theorem from my Linear Algebra course. I understand that A|b must reduce under elementary row operations to a consistent form, but I don't understand exactly why the matrix A|b needs to have the same rank as A for this to happen.</p> <p>Please correct me if I am mistaken.</p>
[ { "answer_id": 2090749, "score": 14, "text": "<p>To see this, note that the rank of a matrix is the dimension of the span of its columns. Now if $Ax=b$ has a solution, then some linear combination of the columns of $A$ gives us $b$, which implies that $b$ lies in $span(A)$ and so $rank(A|b)=rank(A)$. You can argue similarly in the reverse direction.</p>\n" }, { "answer_id": 2090777, "score": 2, "text": "<p>To get started,</p>\n\n<p>If the system contains a row of the form [ 0 0 0 0 | b ] with b ≠ 0, then the system is inconsistent and has no solution. The rank is the number of pivots a matrix has in echelon form; here b is the pivot in this row.</p>\n\n<p>Let M=[A, b] be the augmented matrix, where A is the original matrix.</p>\n\n<p>The system has a solution if and only if rank(A)=rank(M).</p>\n\n<p>If b is a pivot of M then rank(M)>rank(A), and the solution does not exist.</p>\n" }, { "answer_id": 4491122, "score": 0, "text": "<p>Let <span class=\"math-container\">$Ax=b$</span> be the system of equations. Suppose <span class=\"math-container\">$Ax=b$</span> has a solution; then <span class=\"math-container\">$b\\in R(A)$</span>, so <span class=\"math-container\">$\\text{Rank}(A)=\\text{Rank}(A|b)$</span>.</p>\n<p>Conversely, let <span class=\"math-container\">$a_1,a_2,...,a_n$</span> be the columns of <span class=\"math-container\">$A$</span> and suppose <span class=\"math-container\">$\\text{Rank}(A)=\\text{Rank}(A|b)=r$</span>. Say <span class=\"math-container\">$a_1,a_2,...,a_r$</span> are <span class=\"math-container\">$r$</span> linearly independent vectors among <span class=\"math-container\">$a_1,...,a_n$</span> and hence a basis of the column space; but they are also <span class=\"math-container\">$r$</span> linearly independent columns of <span class=\"math-container\">$(A|b)$</span> and hence a basis of the column space of the augmented matrix, since the rank of the augmented matrix is <span class=\"math-container\">$r$</span>. So <span class=\"math-container\">$b$</span> is a linear combination of <span class=\"math-container\">$a_1,a_2,...,a_r$</span>. If <span class=\"math-container\">$c_1a_1+...+c_ra_r=b$</span>, then <span class=\"math-container\">$x=\\begin{pmatrix}c_1\\\\c_2\\\\\\vdots\\\\c_r\\\\0\\\\\\vdots\\\\0\\end{pmatrix}$</span> is a solution, hence we are done.</p>\n" } ]
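The rank criterion in these answers is straightforward to check numerically. The following is a minimal illustrative sketch (not taken from the posts themselves; it assumes NumPy is available) comparing rank(A) with rank(A|b) for a consistent and an inconsistent right-hand side:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])         # rank 1: the second row is twice the first

b_good = np.array([[3.0], [6.0]])  # lies in the column space of A
b_bad = np.array([[3.0], [7.0]])   # does not lie in the column space of A

for b in (b_good, b_bad):
    Ab = np.hstack([A, b])         # the augmented matrix (A|b)
    r_A = np.linalg.matrix_rank(A)
    r_Ab = np.linalg.matrix_rank(Ab)
    print(r_A, r_Ab, "consistent" if r_A == r_Ab else "inconsistent")
# prints: 1 1 consistent, then 1 2 inconsistent
```

Here b_good equals three times the first column of A, so appending it leaves the rank unchanged and $Ax=b$ is solvable; b_bad raises the rank to 2, so no solution exists, exactly as the answers argue.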
Question: Linear equations given by $Ax = b$ have a solution if and only if $\operatorname{rank}(A|b) = \operatorname{rank}(A)$ Answer 1: <p>To see this note that rank of the matrix is dimension of the span of columns of the matrix. Now if $Ax=b$ has solution, then it means that some linear combination of columns of A gives us b, which implies that $b$ lies in $span(A)$ and so $rank(A|b)=rank(A)$. You can argue similarly in the reverse direction.</p> Answer 2: <p>To get started,</p> <p>If the system contains a row such that [ 0 0 0 0 | b ] with b=/=0 then the system is inconsistent and has no solution. The rank is the number of pivots matrix X has in echelon form, whereby b is the pivot in this row.</p> <p>Let M=[A ,B], the augmented matrix, where A is the original matrix. </p> <p>The system has a solution if and only if rank(A)=rank(M).</p> <p>If b is a pivot of M then the solution does not exist. Hence rank(m)>rank(a).</p> Answer 3: <p>Let <span class="math-container">$Ax=b$</span> be the system of equations.Suppose <span class="math-container">$Ax=b$</span> has a solution,then <span class="math-container">$b\in R(A)$</span> so <span class="math-container">$\text{Rank}(A)=\text{Rank}(A|b)$</span>.</p> <p>Conversely,if <span class="math-container">$a_1,a_2,...,a_n$</span> be the columns of <span class="math-container">$A$</span> and if <span class="math-container">$\text{Rank}(A)=\text{Rank}(A|b)=r$</span>,then suppose <span class="math-container">$a_1,a_2,...,a_r$</span> are <span class="math-container">$r$</span> linearly independent vectors among <span class="math-container">$a_1,...,a_n$</span> and hence a basis of the column space,but they are <span class="math-container">$r$</span> linearly independent columns of <span class="math-container">$(A|b)$</span> and hence a basis of the column space of the augmented matrix as rank of augmented matrix is <span class="math-container">$r$</span>.So,<span class="math-container">$b$</span> is a linear combination of <span class="math-container">$a_1,a_2,...,a_r$</span>.Suppose <span class="math-container">$c_1a_1+...+c_ra_r=b$</span> then <span class="math-container">$x=\begin{pmatrix}c_1\\c_2\\\vdots\\c_r\\0\\\vdots\\0\end{pmatrix}$</span> is a solution,hence we are done.</p> Answer 4: <p>To see this note that rank of the matrix is dimension of the span of columns of the matrix. Now if $Ax=b$ has solution, then it means that some linear combination of columns of A gives us b, which implies that $b$ lies in $span(A)$ and so $rank(A|b)=rank(A)$. You can argue similarly in the reverse direction.</p> Answer 5: <p>To get started,</p> <p>If the system contains a row such that [ 0 0 0 0 | b ] with b=/=0 then the system is inconsistent and has no solution. The rank is the number of pivots matrix X has in echelon form, whereby b is the pivot in this row.</p> <p>Let M=[A ,B], the augmented matrix, where A is the original matrix. </p> <p>The system has a solution if and only if rank(A)=rank(M).</p> <p>If b is a pivot of M then the solution does not exist. 
This function must be open if these points are isolated
2,092,954
9
https://math.stackexchange.com/questions/2092954/this-function-must-be-open-if-these-points-are-isolated
<p>Let $U\subset \mathbb R^m$ be an open set and $f:U\to \mathbb R^m$ be a function of class $C^1$.</p> <p>If the points where the determinant of the Jacobian matrix of $f$ vanishes are isolated and $m&gt;1$, how do I prove that this function must be open?</p> <p>I don't even know how to begin; I need some hints.</p>
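A concrete instance may help fix ideas before the answers (an added illustration, not part of the original question). Viewing the complex squaring map $z\mapsto z^2$ as a map of $\mathbb R^2$,
$$
f(x,y)=(x^2-y^2,\;2xy),\qquad
\det f'(x,y)=\det\begin{pmatrix}2x&-2y\\2y&2x\end{pmatrix}=4(x^2+y^2),
$$
the Jacobian determinant vanishes only at the isolated point $(0,0)$, and $f$ is indeed an open map. By contrast, for $m=1$ the map $f(x)=x^2$ has $f'(x)=0$ only at the isolated point $x=0$, yet $f\big((-1,1)\big)=[0,1)$ is not open, so the hypothesis $m>1$ cannot be dropped.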
[ { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. \nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
\nIf there exists a sequence $\\{\\boldsymbol{x}_n\\}\\subset B_R\\smallsetminus\\{0\\}$,\nwith $\\boldsymbol{x}_n\\to 0$ and $f(\\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ \ncontains a term $\\boldsymbol{x}_n$ and as $\\det f'(\\boldsymbol{x}_n)\\ne 0$, then $f[B_r]$ contains \nan open ball centered at $f(\\boldsymbol{x}_n)=0$. Let's now assume that no such sequence \nexists, which implies that there exists an $\\varrho_1\\in (0,R)$, such that\n$$\nf(x)\\ne 0\\quad\\text{whenever $0&lt;|\\boldsymbol{x}|\\le \\varrho_1$}.\n$$\nWe pick an arbitrary $\\varrho\\in(0,\\varrho_1)$ and \nlet \n$$\n\\min_{\\boldsymbol{x}\\in C_\\varrho} |f(\\boldsymbol{x})|=2d,\n\\quad \\text{where $C_\\varrho=\\{\\boldsymbol{x}: |\\boldsymbol{x}|=\\varrho \\}$}.\n$$ \nWe shall show that $B_d\\subset f[B_\\varrho]$. Clearly, it suffices to show that\n$B_d\\!\\smallsetminus\\!\\{0\\}\\subset f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]$. Let\n$$\nV=(B_d\\!\\smallsetminus\\!\\{0\\})\\cap f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}]\n\\quad\\text{and}\\quad\nW=(B_d\\!\\smallsetminus\\!\\{0\\})\\!\\smallsetminus\\! f[B_\\varrho\\!\\smallsetminus\\!\\{0\\}].\n$$\nIt suffices to show that both $V$ and $W$ are open, since $B_d\\!\\smallsetminus\\!\\{0\\}$ is connected.\n(In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is\nopen as a consequence of the Inverse Function Theorem. It remains to show that $W$ is\nalso open.</p>\n\n<p>Let $\\,\\boldsymbol{w}\\in W$. \nSince $\\,\\boldsymbol{w}\\not\\in f[B_\\varrho]$, \nthen the function $\\,g:\\overline{B}_\\varrho\\to\\mathbb R^m$, defined as\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2,\n$$ \ndoes not vanish \nand hence it attains a minimum in some $\\boldsymbol{x}_0\\in\\overline{B}_\\varrho$, where \n$\\overline{B}_\\varrho$ is the closed ball of radius $\\varrho$ centered at the origin. In particular, \n$|\\boldsymbol{x}_0|&lt;\\varrho$, since for $\\boldsymbol{x}\\in C_\\varrho$ we have\n$$\ng(\\boldsymbol{x})=|f(\\boldsymbol{x})-\\boldsymbol{w}|^2&gt;\\big(|f(\\boldsymbol{x})|-|\\boldsymbol{w}|\\big)^2&gt;\n\\big(2d-d\\big)^2=d^2&gt;|\\boldsymbol{w}|^2=g(0).\n$$\nTherefore, since the continuously differentiable function\n$g$ attains a local minimum at $\\boldsymbol{x}_0$, then its gradient<br>\nvanishes at $\\boldsymbol{x}_0$\n$$\n0=\\nabla g(\\boldsymbol{x}_0)=2f'(\\boldsymbol{x}_0)\\big( f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\big),\n$$\nand since $f(\\boldsymbol{x}_0)-\\boldsymbol{w}\\ne 0$, \nthe matrix $f'(\\boldsymbol{x}_0)$ is not invertible and therefore\n$\\det f'(\\boldsymbol{x}_0)=0$.\nNow, since $\\det f'(\\boldsymbol{x})$ vanishes only for $\\boldsymbol{x}=0$, \nthen $\\boldsymbol{x}_0=0$, which\nimplies that for every $\\boldsymbol{x}\\in \\overline{B}_\\varrho$\n$$\n|f(\\boldsymbol{x})-\\boldsymbol{w}|^2=g(\\boldsymbol{x})\\ge g(0)=|\\boldsymbol{w}|^2.\n$$\nThis means that the whole open ball of radius $|\\boldsymbol{w}|$ centered \nat $\\boldsymbol{w}$ does not intersect\nthe set $f\\big[\\overline{B}_\\varrho\\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>\n" }, { "answer_id": 2097983, "score": 0, "text": "<p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. 
\nYou can start by reading <a href=\"https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping\" rel=\"nofollow noreferrer\">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is \"Differential Topology\" by Guillemin and Pollack (not on the wikipedia list). </p>\n\n<p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\\in N - f(\\partial M)$. In the case when $y$ is a regular value of $f$,\n$$\ndeg_f(y)=\\sum_{x\\in f^{-1}(y)} (\\pm 1),\n$$\nwhere $1$ is used when $df: T_xM\\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\\partial M)$.</p>\n\n<p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\\ne 0$, hence, consider an (isolated) critical point $x_0\\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\\notin f(\\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\\in B- \\{x_0\\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>\n" }, { "answer_id": 2152192, "score": 0, "text": "<p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p>\n\n<p>It suffices to prove the question in the case of just one singular point, i.e.,\none point where the Jacobian of $f$ (denoted as $\\det f'(\\boldsymbol{x}))$, vanishes. We shall then show the following:</p>\n\n<p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball \nof radius $R$ in $\\mathbb R^m$, centered at the origin, and $f:B_R\\to\\mathbb R^m$, continuously differentiable. \nIf the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p>\n\n<p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open,\nfor every $r&lt;R$.</p>\n\n<p>We assume for simplicity that $f(0)=0$. 
Question: This function must be open if these points are isolated Answer 1: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> Wikipedia article (you only need the section about the degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the Wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compact sets are compact). The degree $\deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ \deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $+1$ is used when $df_x: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above, is a regular value, then $\deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of the degree is that it is locally constant; in particular, it is constant on each component of $N - f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is $0$-dimensional and $m&gt;1$; hence the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive Jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the Inverse Function Theorem) at the points where $J_f(x)\ne 0$; hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$ with $B\subset U$; I am assuming that $x_0$ is the only critical point of $f$ in $B$. Without loss of generality (for a generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $\deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $\deg_g(y_i)&gt;0$ for $y_i=f(x_i)$. Hence, by local constancy of the degree, $\deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $\deg_g(z_j)=0$ for all large $j$, so $\deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p>
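<p>A quick way to make the degree bookkeeping above concrete is the model case of the complex squaring map on the unit disk $B\subset\mathbb R^2\cong\mathbb C$ (a standard illustration, not specific to this question): $$ f(x,y)=(x^2-y^2,\;2xy),\qquad J_f(x,y)=\det\begin{pmatrix} 2x &amp; -2y \\ 2y &amp; 2x \end{pmatrix}=4(x^2+y^2), $$ so the origin is the only critical point and $J_f&gt;0$ elsewhere. Every value $y_0$ with $0&lt;|y_0|&lt;1$ is regular and has exactly two preimages in $B$ (the two complex square roots), each counted with sign $+1$, hence $\deg_f(y_0)=2$. Since $f(\partial B)$ is the unit circle, the open unit disk is a single component of $\mathbb R^2 - f(\partial B)$, and local constancy gives $\deg_f\equiv 2$ there; in particular $\deg_f(0)=2\ne 0$, so $0=f(0)$ is an interior point of the image, exactly as the argument above requires.</p>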
Answer 2: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the claim in the case of just one singular point, i.e., one point where the Jacobian determinant of $f$, denoted $\det f'(\boldsymbol{x})$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$, let $B_R$ be the open ball of radius $R$ in $\mathbb R^m$ centered at the origin, and let $f:B_R\to\mathbb R^m$ be continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $\boldsymbol{x}=0$, then $f$ is open.</em></p> <p>In order to prove the above claim, it suffices to show that $f[B_r]$ is open for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$, and since $\det f'(\boldsymbol{x}_n)\ne 0$, the Inverse Function Theorem shows that $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let us now assume that no such sequence exists, which implies that there exists a $\varrho_1\in (0,R)$ such that $$ f(\boldsymbol{x})\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ Since $B_d\!\smallsetminus\!\{0\}$ is connected and $V$ is nonempty (by continuity, $f(\boldsymbol{x})\in B_d\!\smallsetminus\!\{0\}$ for every sufficiently small $\boldsymbol{x}\ne 0$), it suffices to show that both $V$ and $W$ are open: connectedness then forces $W=\varnothing$. (In fact, this is the only place where we use the requirement that $m&gt;1$: precisely for $m&gt;1$ the punctured ball is connected.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, the continuous function $\,g:\overline{B}_\varrho\to\mathbb R$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ attains its minimum, by compactness, at some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2\ge\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore the continuously differentiable function $g$ attains a local minimum at the interior point $\boldsymbol{x}_0$, so its gradient vanishes there: $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)^{\mathsf T}\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$ (recall that $\boldsymbol{w}\not\in f[B_\varrho]$ and $|\boldsymbol{x}_0|&lt;\varrho$), the matrix $f'(\boldsymbol{x}_0)^{\mathsf T}$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, we conclude that $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$; its intersection with $B_d\!\smallsetminus\!\{0\}$ is therefore an open neighborhood of $\boldsymbol{w}$ contained in $W$, so $W$ is open, which concludes the proof of the Claim.</p>
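<p>The requirement $m&gt;1$ in the Claim is sharp, and the connectedness step is precisely where it is used; in one dimension the statement fails already for $$ f:(-R,R)\to\mathbb R,\qquad f(x)=x^2,\qquad f'(x)=2x, $$ whose derivative vanishes only at $x=0$, while $f[(-R,R)]=[0,R^2)$ is not open. Running the argument above with a small $\varrho$ (so that $2d=\min_{|x|=\varrho}|f(x)|=\varrho^2$) gives $V=(0,d)$ and $W=(-d,0)$: both sets are open and nonempty, and no contradiction arises because the punctured ball $B_d\smallsetminus\{0\}$ is disconnected when $m=1$.</p>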
qed</p> Answer 10: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. 
$$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 11: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 12: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. 
If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 13: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. 
But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 14: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. 
$$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 15: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. 
An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 16: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. 
It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 17: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. 
Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 18: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). 
$$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 19: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. 
qed</p> Answer 20: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. 
$$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 21: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 22: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. 
If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 23: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. 
But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 24: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. 
$$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 25: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. 
An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 26: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. 
It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 27: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. 
Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 28: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). 
$$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 29: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. 
qed</p> Answer 30: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. 
$$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 31: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 32: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. 
If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 33: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. 
But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 34: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. 
$$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p> Answer 35: <p>The thing that you are missing is the notion of the <strong>degree</strong> of a map. You can start by reading <a href="https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping" rel="nofollow noreferrer">this</a> wikipedia article (you only need the section about degree of maps from a closed region), then take a look at the references they list. But the best reference I know is "Differential Topology" by Guillemin and Pollack (not on the wikipedia list). </p> <p>The setup is that you have two oriented smooth manifolds $M, N$ of dimension $m$, where $M$ possibly has boundary while $N$ has empty boundary. Suppose that $f: M\to N$ is a $C^1$-smooth proper map (proper means that preimages of compacts are compact). The degree $deg_f(y)$ is defined for $y\in N - f(\partial M)$. In the case when $y$ is a regular value of $f$, $$ deg_f(y)=\sum_{x\in f^{-1}(y)} (\pm 1), $$ where $1$ is used when $df: T_xM\to T_yN$ is orientation-preserving and $-1$ otherwise. In particular, if $f$ preserves orientation (at its regular points) and $y$, as above is a regular value, then $deg_f(y)$ is the cardinality of $f^{-1}(y)$. 
An important property of degree is that it is locally constant, in particular, it is constant on each component of $N -f(\partial M)$.</p> <p>Given all this, here is a proof of the claim that you are after. It suffices to consider the case when $U$ is connected. Note that the set of critical points of your map $f$ is 0-dimensional, $m&gt;1$, hence, the critical set of $f$ does not separate $U$. Therefore, without loss of generality, we may assume that $f$ has positive jacobian determinant $J_f(x)$ at its regular points. Openness is a local property. It is clear (by the inverse function theorem) at the points where $J_f(x)\ne 0$, hence, consider an (isolated) critical point $x_0\in U$ of $f$. Let $B$ be a sufficiently small closed ball centered at $x_0$, $B\subset U$, I am assuming that $x_0$ is the only critical points of $f$ in $B$. Without loss of generality (for generic choice of the radius of $B$), $y_0:=f(x_0)\notin f(\partial B)$. Now, I will take $M:=B$ and consider $deg_g$ for the restriction $g$ of $f$ to $M=B$. There is a sequence of points $x_i\in B- \{x_0\}$ converging to $x_0$. Then, since $J_g$ is positive away from $x_0$ (in $B$), $deg_g(y_i)&gt;0$, $y_i=f(x_i)$. Hence, by continuity of degree, $deg_g(y_0)&gt;0$. Now, if $y_0$ is not an interior point of $f(B)$, there is a sequence $z_j\notin f(B)$ converging to $y_0$. Hence, $deg_g(z_i)=0$ for all large $j$. Hence, $deg_g$ is discontinuous at $y_0$. A contradiction. Thus, $f$ is open at $x_0$. qed</p> Answer 36: <p>Here is an <em>elementary</em> proof, using only the Inverse Function Theorem. </p> <p>It suffices to prove the question in the case of just one singular point, i.e., one point where the Jacobian of $f$ (denoted as $\det f'(\boldsymbol{x}))$, vanishes. We shall then show the following:</p> <p><strong>Claim.</strong> <em>Let $m&gt;1$ and $B_R$ be the open ball of radius $R$ in $\mathbb R^m$, centered at the origin, and $f:B_R\to\mathbb R^m$, continuously differentiable. If the determinant of the Jacobian of $f$ vanishes only at $x=0$, then $f$ is open.</em></p> <p>In order to show that the above claim, it suffices to show that $f[B_r]$ is open, for every $r&lt;R$.</p> <p>We assume for simplicity that $f(0)=0$. If there exists a sequence $\{\boldsymbol{x}_n\}\subset B_R\smallsetminus\{0\}$, with $\boldsymbol{x}_n\to 0$ and $f(\boldsymbol{x}_n)=0$, then for every $r&lt;R$, the ball $B_r$ contains a term $\boldsymbol{x}_n$ and as $\det f'(\boldsymbol{x}_n)\ne 0$, then $f[B_r]$ contains an open ball centered at $f(\boldsymbol{x}_n)=0$. Let's now assume that no such sequence exists, which implies that there exists an $\varrho_1\in (0,R)$, such that $$ f(x)\ne 0\quad\text{whenever $0&lt;|\boldsymbol{x}|\le \varrho_1$}. $$ We pick an arbitrary $\varrho\in(0,\varrho_1)$ and let $$ \min_{\boldsymbol{x}\in C_\varrho} |f(\boldsymbol{x})|=2d, \quad \text{where $C_\varrho=\{\boldsymbol{x}: |\boldsymbol{x}|=\varrho \}$}. $$ We shall show that $B_d\subset f[B_\varrho]$. Clearly, it suffices to show that $B_d\!\smallsetminus\!\{0\}\subset f[B_\varrho\!\smallsetminus\!\{0\}]$. Let $$ V=(B_d\!\smallsetminus\!\{0\})\cap f[B_\varrho\!\smallsetminus\!\{0\}] \quad\text{and}\quad W=(B_d\!\smallsetminus\!\{0\})\!\smallsetminus\! f[B_\varrho\!\smallsetminus\!\{0\}]. $$ It suffices to show that both $V$ and $W$ are open, since $B_d\!\smallsetminus\!\{0\}$ is connected. (In fact, here is the only place where we use the requirement that $m&gt;1$.) The set $V$ is open as a consequence of the Inverse Function Theorem. 
It remains to show that $W$ is also open.</p> <p>Let $\,\boldsymbol{w}\in W$. Since $\,\boldsymbol{w}\not\in f[B_\varrho]$, then the function $\,g:\overline{B}_\varrho\to\mathbb R^m$, defined as $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2, $$ does not vanish and hence it attains a minimum in some $\boldsymbol{x}_0\in\overline{B}_\varrho$, where $\overline{B}_\varrho$ is the closed ball of radius $\varrho$ centered at the origin. In particular, $|\boldsymbol{x}_0|&lt;\varrho$, since for $\boldsymbol{x}\in C_\varrho$ we have $$ g(\boldsymbol{x})=|f(\boldsymbol{x})-\boldsymbol{w}|^2&gt;\big(|f(\boldsymbol{x})|-|\boldsymbol{w}|\big)^2&gt; \big(2d-d\big)^2=d^2&gt;|\boldsymbol{w}|^2=g(0). $$ Therefore, since the continuously differentiable function $g$ attains a local minimum at $\boldsymbol{x}_0$, then its gradient<br> vanishes at $\boldsymbol{x}_0$ $$ 0=\nabla g(\boldsymbol{x}_0)=2f'(\boldsymbol{x}_0)\big( f(\boldsymbol{x}_0)-\boldsymbol{w}\big), $$ and since $f(\boldsymbol{x}_0)-\boldsymbol{w}\ne 0$, the matrix $f'(\boldsymbol{x}_0)$ is not invertible and therefore $\det f'(\boldsymbol{x}_0)=0$. Now, since $\det f'(\boldsymbol{x})$ vanishes only for $\boldsymbol{x}=0$, then $\boldsymbol{x}_0=0$, which implies that for every $\boldsymbol{x}\in \overline{B}_\varrho$ $$ |f(\boldsymbol{x})-\boldsymbol{w}|^2=g(\boldsymbol{x})\ge g(0)=|\boldsymbol{w}|^2. $$ This means that the whole open ball of radius $|\boldsymbol{w}|$ centered at $\boldsymbol{w}$ does not intersect the set $f\big[\overline{B}_\varrho\big]$, and hence $W$ is open, which concludes the proof of the Claim.</p>
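<p>As a concrete illustration of the degree formula above (a minimal sketch added for illustration, assuming NumPy; it is not part of the answer itself), take $g(z)=z^2$ on the closed unit disk, viewed as a map $\mathbb R^2\to\mathbb R^2$. Its only critical point is $z=0$, its Jacobian determinant equals $4|z|^2$, and every regular value $y\ne 0$ with $|y|&lt;1$ has exactly two preimages in the disk, each counted with sign $+1$, so $deg_g(y)=2&gt;0$:</p>

<pre><code>import numpy as np

# g(z) = z^2, viewed as a map R^2 -&gt; R^2; its Jacobian determinant is 4|z|^2,
# which vanishes only at the critical point z = 0.
def jacobian_det(z):
    return 4.0 * abs(z) ** 2

def degree_at(y):
    # Degree of g on the unit disk at a regular value y: the sum of the signs
    # of the Jacobian determinant over the preimages of y inside the disk.
    roots = [np.sqrt(y), -np.sqrt(y)]             # the two square roots of y
    inside = [z for z in roots if abs(z) &lt; 1]   # preimages in the open disk
    return sum(1 if jacobian_det(z) &gt; 0 else -1 for z in inside)

y = 0.25 + 0.1j          # a regular value close to the critical value g(0) = 0
print(degree_at(y))      # prints 2: the degree is positive near the critical value
</code></pre>

<p>Replacing $y$ by values approaching $0$ leaves the count at $2$, which illustrates why the degree at the critical value is positive in the argument above.</p>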
Cantor Theorem assumes set is non-empty?
2,093,647
9
https://math.stackexchange.com/questions/2093647/cantor-theorem-assumes-set-is-non-empty
<p>Cantor's Theorem:</p> <blockquote> <p>For any set $X$, there is no onto function $f:X\to \mathcal{P}(X)$.</p> </blockquote> <p>While I was looking at the proof of this, I got stuck. I think I understand the proof itself, but doesn't it assume that the set of elements that are not in their image under the given function is non-empty? Looking at it, I think it only proves that this set has no element. Doesn't it derive a contradiction from an assumption made earlier? Can you help me understand?</p>
[ { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. 
$f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. 
This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" }, { "answer_id": 2093659, "score": 14, "text": "<p>No. The proof assumes nothing.</p>\n\n<p>If $f\\colon A\\to\\mathcal P(A)$ is <em>any</em> function, then $A_f=\\{a\\in A\\mid a\\notin f(a)\\}$ is not in the range of $f$.</p>\n\n<ul>\n<li>If $A=\\varnothing$, then the <em>only</em> function with the empty domain is $\\varnothing$ itself, which has an empty range. Then $A_f=\\varnothing$, but $\\varnothing\\notin\\operatorname{rng}(\\varnothing)$. So we're fine.</li>\n<li>If $A_f$ is empty, e.g. $f(a)=\\{a\\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li>\n</ul>\n\n<p>In either case, it's fine, since $\\varnothing$ is indeed a subset of $A$ and therefore an element of $\\mathcal P(A)$.</p>\n" }, { "answer_id": 2093656, "score": 7, "text": "<p>Assuming this is your formulation of Cantor's Theorem:</p>\n\n<blockquote>\n <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p>\n</blockquote>\n\n<p>or some equivalent formulation, let's see what happens if $S$ is empty.</p>\n\n<p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. We can see no surjection exists from $\\emptyset$ to $\\mathcal{P}(\\emptyset)$ through some work.</p>\n\n<p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>\n" } ]
Question: Cantor Theorem assumes set is non-empty? Answer 1: <p>No. The proof assumes nothing.</p> <p>If $f\colon A\to\mathcal P(A)$ is <em>any</em> function, then $A_f=\{a\in A\mid a\notin f(a)\}$ is not in the range of $f$.</p> <ul> <li>If $A=\varnothing$, then the <em>only</em> function with the empty domain is $\varnothing$ itself, which has an empty range. Then $A_f=\varnothing$, but $\varnothing\notin\operatorname{rng}(\varnothing)$. So we're fine.</li> <li>If $A_f$ is empty, e.g. $f(a)=\{a\}$, then it just means that no element is mapped to the empty set. This is an extension of the first case, where indeed no element is mapped to the empty set.</li> </ul> <p>In either case, it's fine, since $\varnothing$ is indeed a subset of $A$ and therefore an element of $\mathcal P(A)$.</p> Answer 2: <p>Assuming this is your formulation of Cantor's Theorem:</p> <blockquote> <p>For any set $S$, there is no onto mapping from $S$ to its power set.</p> </blockquote> <p>or some equivalent formulation, let's see what happens if $S$ is empty.</p> <p>If $S$ is empty, that is, of cardinality zero, then we see that its power set contains one element - the empty set. No surjection from $\emptyset$ to $\mathcal{P}(\emptyset)$ exists, because the only function with empty domain is the empty function, whose range is empty and therefore misses $\emptyset$.</p> <p>If this doesn't help, I suggest updating your question with the formulation of Cantor's Theorem you are looking at and details of the proof you see.</p>
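A quick brute-force illustration of the diagonal argument in Answer 1 (this sketch is ours, not part of the original thread; the set `A` and the maps `f` are arbitrary test choices). For a small finite `A`, no map into the power set is onto, and for `A = []` the loop is vacuous, matching the empty-set case discussed above.

```python
# Brute-force check: for every f: A -> P(A), the diagonal set
# D = {a in A | a not in f(a)} is missing from the range of f,
# so no f is onto. Illustration only.
from itertools import combinations, product

def powerset(elems):
    """All subsets of elems, as frozensets."""
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

A = [0, 1, 2]
P = powerset(A)                              # 8 subsets
for values in product(P, repeat=len(A)):     # all 8**3 = 512 maps A -> P(A)
    f = dict(zip(A, values))
    D = frozenset(a for a in A if a not in f[a])
    assert D not in f.values()               # the diagonal set is never hit
print("checked all 512 maps: none is onto, as the argument above predicts")
```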
Evaluate $\int_0^1 \frac{\ln \left(1+x+x^2+\cdots +x^{n} \right)}{x}dx$
2,094,327
9
https://math.stackexchange.com/questions/2094327/evaluate-int-01-frac-ln-left1xx2-cdots-xn-rightxdx
<p>How to evaluate $$\int_0^1 \frac{\ln \left(1+x+x^2+\cdots +x^{n} \right)}{x}dx$$ My attempt: \begin{align*} \int_0^1 \frac{\ln \left(1+x+x^2+\cdots +x^{n} \right)}{x}dx &amp;= \int_0^1 \frac{\ln \left(\dfrac{1-x^{n+1}}{1-x} \right)}{x}dx \\ &amp;= \int_0^1 \frac{\ln \left({1-x^{n+1}} \right)}{x}dx -\int_0^1 \frac{\ln \left({1-x} \right)}{x}dx \\ &amp;=\frac{1}{n+1} \int_0^1 \frac{\ln \left({1-x} \right)}{x}dx -\int_0^1 \frac{\ln \left({1-x} \right)}{x}dx \end{align*} but what's next? I have been stuck here for a while.</p>
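The last equality in the attempt is the substitution $u=x^{n+1}$; spelled out (our addition, for completeness):

```latex
% with u = x^{n+1}: du = (n+1)x^{n}\,dx, hence dx/x = du/((n+1)u)
\int_0^1 \frac{\ln(1-x^{n+1})}{x}\,dx
  = \frac{1}{n+1}\int_0^1 \frac{\ln(1-u)}{u}\,du
```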
[ { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} 
\\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will 
follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" }, { "answer_id": 2094329, "score": 9, "text": "<p>Notice that\n$$-\\mathrm{Li}_{2}\\left ( 1 \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1-x \\right )}{x}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty 
}\\frac{1}{n}\\int_{0}^{1}x^{n-1}\\, \\mathrm{d}x=-\\sum_{n=1}^{\\infty }\\frac{1}{n^{2}}=-\\frac{\\pi ^{2}}{6}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2094330, "score": 6, "text": "<p>$$\\int_0^1 \\dfrac{\\ln(1-x)}xdx = \\int_0^1 \\left(-\\sum_{k=1}^{\\infty} \\dfrac{x^{k-1}}k\\right)dx = -\\sum_{k=1}^{\\infty} \\dfrac1k\\int_0^1 x^{k-1}dx = - \\sum_{k=1}^{\\infty} \\dfrac1{k^2} = -\\zeta(2)$$</p>\n" } ]
Question: Evaluate $\int_0^1 \frac{\ln \left(1+x+x^2+\cdots +x^{n} \right)}{x}dx$ Answer 1: <p>Notice that $$-\mathrm{Li}_{2}\left ( 1 \right )=\int_{0}^{1}\frac{\ln\left ( 1-x \right )}{x}\, \mathrm{d}x=-\sum_{n=1}^{\infty }\frac{1}{n}\int_{0}^{1}x^{n-1}\, \mathrm{d}x=-\sum_{n=1}^{\infty }\frac{1}{n^{2}}=-\frac{\pi ^{2}}{6}$$ and the answer will follow.</p> Answer 2: <p>$$\int_0^1 \dfrac{\ln(1-x)}xdx = \int_0^1 \left(-\sum_{k=1}^{\infty} \dfrac{x^{k-1}}k\right)dx = -\sum_{k=1}^{\infty} \dfrac1k\int_0^1 x^{k-1}dx = - \sum_{k=1}^{\infty} \dfrac1{k^2} = -\zeta(2)$$</p>
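Combining the two answers with the reduction in the question gives $\int_0^1 \frac{\ln(1+x+\cdots+x^n)}{x}dx=\left(1-\frac{1}{n+1}\right)\frac{\pi^2}{6}=\frac{n}{n+1}\cdot\frac{\pi^2}{6}$. A numerical spot-check (ours; it assumes SciPy is available):

```python
# Numerically verify I(n) = n/(n+1) * pi^2/6 for a few values of n.
import math
from scipy.integrate import quad   # adaptive quadrature

def I(n):
    def integrand(x):
        if x == 0.0:
            return 1.0             # limit of log(1 + x + ...)/x as x -> 0
        return math.log(sum(x**k for k in range(n + 1))) / x
    value, _abserr = quad(integrand, 0.0, 1.0)
    return value

for n in (1, 2, 5):
    closed_form = n / (n + 1) * math.pi**2 / 6
    print(n, round(I(n), 10), round(closed_form, 10))
```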
Linear independence of a set of solutions and the Wronskian
2,094,538
9
https://math.stackexchange.com/questions/2094538/linear-independence-of-a-set-of-solutions-and-the-wronskian
<blockquote> <p>Consider a general $n$th order linear equation $$x^{(n)}(t)+a_{n-1}x^{(n-1)}(t)+ \dots + a_{1}x'(t) + a_{0}x(t)=0\tag{$*$}.$$ Let $x_1, x_2 , \dots , x_n$ be a fundamental set of solutions of the above and set $W(t)=W(x_1, x_2 , \dots , x_n ; t).$ </p> </blockquote> <p><strong>Question.</strong> Show that a set of solutions $x_1 , x_2 , \dots , x_k$ of $(*)$ are linearly independent over $(-\infty, \infty)$ if and only if their Wronskian $W(x_1 , x_2 , \dots , x_k; t_0) \neq 0$ for some $t_0 \in (-\infty, \infty).$ Also show that those solutions form a vector space of dimension $n$.</p> <p><strong>My approach</strong>: Writing the equivalent first order system,<br> $$y_1=x ,~y_2=x' ,~\dots~,y_n=x^{(n-1)},$$ from which we get $$y_1'=y_2,~~y_2'=y_3,~~\dots~~,y_{n-1}'=y_n,~~y_n'=-a_{n-1}(t) y_n- \cdots - a_{1}(t) y_2-a_{0}(t) y_1.$$</p> <p>For the contrapositive statement: if $W(x_1 , x_2 , \dots , x_k; t_0) = 0$ for some $t_0 \in (-\infty, \infty),$ doesn't that clearly imply that the set of vectors $\{ x_1 , x_2 , \dots , x_k \}$ is linearly dependent?</p> <p>I'm stuck and cannot progress any further. Any help in proving this is much appreciated. </p>
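The reduction in "My approach" can be written as $y'=A(t)y$ with a companion matrix. A minimal sketch (ours, not from the thread; `a_coeffs` is a hypothetical list of constant coefficients $[a_0,\dots,a_{n-1}]$):

```python
# Companion matrix of x^{(n)} + a_{n-1} x^{(n-1)} + ... + a_0 x = 0,
# using y1 = x, ..., yn = x^{(n-1)} as in the post. Sketch only.
import numpy as np

def companion(a_coeffs):
    n = len(a_coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)           # y_k' = y_{k+1}
    A[-1, :] = [-a for a in a_coeffs]    # y_n' = -a_0 y_1 - ... - a_{n-1} y_n
    return A

# Example: x''' + 3x'' + 2x = 0  ->  a_coeffs = [2, 0, 3]
print(companion([2.0, 0.0, 3.0]))
```

For time-dependent coefficients one would assemble $A(t)$ the same way at each $t$.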
[ { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. 
This happens because the functions $x_1(t)$ and $x_2(t)$ are not solutions of any linear differential equation of this form.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of an $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and differentiate it. The converse is obvious. Thus, if you take the vectors $(x_j^{(i)}(t))$, where $i$ is the order of differentiation ($i=0,\\dots,n-1$) and $j=1,\\dots,n$ is the index of the solution, and put them into a matrix, you obtain the Wronskian matrix. You conclude that the Wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the Wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $W'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix $A$). This is true since, writing $\\Phi(t)$ for the matrix of solutions whose determinant is $W(t)$, for any $\\tau$ and small $\\epsilon$ we have </p>\n\n<p>$$\n\\Phi(\\tau+\\epsilon)=\\Phi(\\tau)+\\epsilon \\Phi'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))\\Phi(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of $A$), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=(1+\\epsilon\\, tr(A(\\tau)))W(\\tau)+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1),\n$$\nwhich for $\\epsilon \\to 0$ gives the claim.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)\\exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian is either identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ at some point $t_0$. Consider the linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;~~\\vdots\\\\\n&amp;c_1x_1^{(n-1)}+c_2x_2^{(n-1)}+\\cdots+c_nx_n^{(n-1)}=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ these equations hold, and in matrix form we can write them as $$M(t_0)C=\\mathbf{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ and $M(t_0)$ is the Wronskian matrix at $t_0$; but $\\det M(t_0)=W(t_0)\\neq0$ and hence $C=0.$ This shows that $\\{x_1,x_2,\\cdots,x_n\\}$ are linearly independent, and since there can be at most $n$ linearly independent solutions, the dimension of this vector space is $n$. </p>\n" }
Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. 
\n\n<p><b>Theorem: [Wronskii]</b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$, not all zero, such that\n$$\n \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \left[\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\\n x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\\n \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t)\n \end{array}\right]\n \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0.\n$$\nThe existence of a non-trivial solution $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>
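\n\n<p>(For instance, $x_1(t)=\cos t$ and $x_2(t)=2\cos t$ are linearly dependent solutions of $x''+x=0$, and indeed $W(x_1,x_2)=x_1x_2'-x_1'x_2\equiv 0$.)</p>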
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
You conclude that the wronskian is non-zero for any $t \\in \\mathbb{R}$ if and only if the columns are linearly independent.</p>\n\n<p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true, since for any interval $(\\tau,\\tau+\\epsilon)$, we have </p>\n\n<p>$$\n\\bf{W}(\\tau+\\epsilon)=W(\\tau)+\\epsilon W'(\\tau)+o(\\epsilon)=(I_n+\\epsilon A(\\tau))W(\\tau)+o(\\epsilon)\n$$</p>\n\n<p>By Binet, recalling that $det(I_n+\\epsilon A)=1+\\epsilon tr(A)+O(\\epsilon^2)$ (observe that the eigenvalues of $I_n+\\epsilon A$ are $1+\\epsilon \\lambda_i$, where $\\lambda_i$ are the eigenvalues of A), you have that</p>\n\n<p>$$\nW(\\tau+\\epsilon)=det(I_n+\\epsilon tr(A(\\tau))+o(\\epsilon)\n$$\nby which it follows that</p>\n\n<p>$$\n\\frac{W(\\tau+\\epsilon)-W(\\tau)}{\\epsilon}=tr(A(\\tau))W(\\tau)+o(1)\n$$\nthat for $\\epsilon \\to 0$ satisfies the thesis.</p>\n\n<p>Liouville's theorem implies that $W(t)=W(\\tau)exp(\\int_\\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p>\n" }, { "answer_id": 2104088, "score": 1, "text": "<p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \\begin{align*}&amp;\\left(\\frac{d^n}{dt^n}+a_{n-1}\\frac{d^{n-1}}{dt^{n-1}}+\\cdots+a_0\\right)(cx_1+x_2)\\\\=&amp;c\\left(\\frac{d^nx_1}{dt^n}+a_{n-1}\\frac{d^{n-1}x_1}{dt^{n-1}}+\\cdots+a_0x_1\\right)+\\left(\\frac{d^nx_2}{dt^n}+a_{n-1}\\frac{d^{n-1}x_2}{dt^{n-1}}+\\cdots+a_0x_2\\right)\\\\=&amp;0+0\\\\=&amp;0\\end{align*}\nHence it is a vector space. Now assume $W\\neq0$ for some point $t_0$. Consider the Linear combination \n$$c_1x_1+c_2x_2+\\cdots+c_nx_n=0$$ then we have \\begin{align*}\n&amp;c_1x_1'+c_2x_2'+\\cdots+c_nx_n'=0\\\\\n&amp;c_1x_1^{''}+c_2x_2^{''}+\\cdots+c_nx_n^{''}=0\\\\\n&amp;.\\\\\n&amp;.\\\\\n&amp;c_1x_1^n+c_2x_2^n+\\cdots+c_nx_n^n=0~~~~~\\text{for all t}\n\\end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\\mathbb{0},~~~ \\text{where } C=[c_1~c_2~\\cdots c_n]^T $$ but at $t_0$, $W(t_0)\\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p>\n" }, { "answer_id": 2106925, "score": 2, "text": "<p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p>\n\n<p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that\n$$\n x^{(k)}(t_0)=a_k,\\;\\;\\; k=0,1,2,\\cdots,n-1.\n$$\n<b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p>\n\n<p><b>Theorem: [Wronskii] </b> Let $\\{ x_1,x_2,\\cdots,x_n \\}$ be a set of solutions of the ODE. 
The Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\\{ x_1,x_2,\\cdots,x_n \\}$ is a linearly dependent set of functions on $I$.</p>\n\n<p><b>Proof:</b> If the set of functions $\\{ x_1,x_2,\\cdot,x_n \\}$ is a linearly dependent set of functions on $I$, then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\n$$\n \\alpha_1 x_1(t)+\\alpha_2 x_2(t)+\\cdots +\\alpha_n x_n(t) = 0,\\;\\;\\; t\\in I.\n$$\nBy differentiating this equation $n-1$ times, one obtains the matrix equation\n$$\n \\left[\\begin{array}{cccc}\n x_1(t) &amp; x_2(t) &amp; \\cdots &amp; x_n(t) \\\\\n x_1'(t) &amp; x_2'(t) &amp; \\cdots &amp; x_n'(t) \\\\\n \\vdots &amp; \\vdots &amp; \\ddots &amp; \\vdots \\\\\n x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \\cdots &amp; x_n^{(n-1)}(t)\n \\end{array}\\right]\n \\left[\\begin{array}{c} \\alpha_1 \\\\ \\alpha_2 \\\\ \\vdots \\\\ \\alpha_n\\end{array}\\right] = 0.\n$$\nThe existence of a non-trivial solutions $[\\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p>\n\n<p>Conversely, suppose that the Wronskian $W(x_1,x_2,\\cdots,x_n)$ vanishes at some $t \\in I$. Then there are constants $\\alpha_1,\\alpha_2,\\cdots,\\alpha_n$ such that\nthe matrix equation of the previous paragraph holds. Hence,\n$$\n x(t) = \\alpha_1 x_1(t)+\\alpha_2 x_2(t) + \\cdots + \\alpha_n x_n(t)\n$$\nis a solution of your ODE which satisfies $x(t)=x'(t)=\\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\\equiv 0$, which proves that the set of functions $\\{x_1,x_2,\\cdots,x_n\\}$ is a linear independent set of functions. $\\blacksquare$</p>\n\n<p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \\in I$. Then the map\n$$\n M : S \\rightarrow \\mathbb{R}^{n}\n$$\ndefined by\n$$\n Mx = \\left[\\begin{array}{c}x(t) \\\\ x'(t) \\\\ \\vdots \\\\ x^{(n-1)}(t)\\end{array}\\right]\n$$\nis linear. This map is injective because $Mx=0$ iff $x\\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>\n" }, { "answer_id": 2099947, "score": 1, "text": "<p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \\in \\mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p>\n\n<p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\\dots,n-1$ and $j=1,\\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
Question: Linear independence of a set of solutions and the Wronskian Answer 1: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $b_0,b_1,\cdots,b_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=b_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronski]</b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$, not all zero, such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. $$ The existence of a non-trivial solution $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then the coefficient matrix above is singular at $t$, so there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$, not all zero, such that the matrix equation of the previous paragraph holds at $t$. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$ at that point. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linearly dependent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an $n$-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 2: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $x_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This can happen because these functions are not both solutions of a single linear differential equation of this kind.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of an $n$th order linear equation are linearly dependent, then so are $x_1'(t),\dots,x_n'(t)$. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and differentiate it. The converse is obvious. Thus, if you take the vectors $(x_j^{(i)}(t))$, where $i$ is the order of differentiation ($i=0,\dots,n-1$) and $j=1,\dots,n$ is the index of the solution, and put them into a matrix, you obtain the Wronskian matrix. You conclude that the Wronskian is non-zero for every $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof by recalling Liouville's Theorem: the Wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $W'=\operatorname{tr}(A(t))\,W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). This is true since, for any interval $(\tau,\tau+\epsilon)$, the fundamental matrix $\mathbf{W}$ satisfies</p> <p>$$ \mathbf{W}(\tau+\epsilon)=\mathbf{W}(\tau)+\epsilon \mathbf{W}'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))\mathbf{W}(\tau)+o(\epsilon). $$</p> <p>By Binet's theorem, recalling that $\det(I_n+\epsilon A)=1+\epsilon \operatorname{tr}(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=\det(I_n+\epsilon A(\tau))\,W(\tau)+o(\epsilon)=(1+\epsilon \operatorname{tr}(A(\tau)))\,W(\tau)+o(\epsilon), $$ from which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=\operatorname{tr}(A(\tau))W(\tau)+o(1), $$ which, letting $\epsilon \to 0$, yields the claim.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)\exp(\int_\tau^t \operatorname{tr}(A(s))\, ds)$, and since the exponential never vanishes, the Wronskian is either identically zero or non-zero everywhere. This concludes the proof.</p>
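A concrete check of the Liouville (Abel) formula may help before the next answer. The following is a minimal worked sketch in LaTeX; the equation $x''+x=0$ and its companion matrix are an assumed illustration, not something taken from the question or the answers.

```latex
% Minimal worked check of the Liouville/Abel formula (assumed example:
% x'' + x = 0, rewritten as x' = A x with A = [[0, 1], [-1, 0]], so tr A = 0).
% Fundamental matrix built from the solutions cos t and sin t:
\[
\mathbf{W}(t)=\begin{pmatrix}\cos t & \sin t\\ -\sin t & \cos t\end{pmatrix},
\qquad
W(t)=\det\mathbf{W}(t)=\cos^2 t+\sin^2 t=1.
\]
% Liouville's formula predicts W(t) = W(tau) * exp( int_tau^t tr A(s) ds )
% = W(tau) * e^0 = W(tau): a constant, never-vanishing Wronskian,
% which matches the direct computation above.
```

Note that $t^2$ and $t|t|$ escape this zero-everywhere-or-nowhere dichotomy precisely because they are not both solutions of one such system.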
Answer 3: <p>First let us prove that the set of solutions is a vector space. For two solutions $x_1$ and $x_2$ of the above equation and any constant $c$ we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0.\end{align*} Hence it is a vector space. Now assume $W\neq0$ at some point $t_0$. Consider the linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0;$$ differentiating repeatedly, we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1''+c_2x_2''+\cdots+c_nx_n''=0\\ &amp;\vdots\\ &amp;c_1x_1^{(n-1)}+c_2x_2^{(n-1)}+\cdots+c_nx_n^{(n-1)}=0~~~~~\text{for all } t. \end{align*}In particular this system holds at $t_0$, and in matrix form we can write it as $$\mathbf{W}(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots~c_n]^T$$ and $\mathbf{W}(t_0)$ is the Wronskian matrix; but $\det\mathbf{W}(t_0)=W(t_0)\neq0$, hence $C=0$. This shows that $\{x_1,x_2,\cdots,x_n\}$ are linearly independent, and since there can be at most $n$ linearly independent solutions, the dimension of this vector space is $n$. </p>
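To see the dimension count of the last answer in action, here is a small sketch; the example $x''+x=0$ (so $n=2$) and the base point $t_0=0$ are assumed for illustration only.

```latex
% Dimension check for an assumed example: x'' + x = 0 on I = R, t_0 = 0, n = 2.
% The evaluation map M x = (x(0), x'(0)) sends cos t to (1,0) and sin t to (0,1),
% a basis of R^2, so every solution is pinned down by its initial data:
\[
x(t)=c_1\cos t+c_2\sin t,\qquad (c_1,c_2)=\bigl(x(0),\,x'(0)\bigr).
\]
% Consistently, the Wronskian matrix at t_0 = 0 is the identity, so
% W(0) = 1 != 0 and the argument above forces C = 0:
% {cos t, sin t} is a linearly independent set spanning the whole solution space.
```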
$$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. $$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 5: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. 
Answer 4: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem [Existence and Uniqueness]:</b> Let $b_0,b_1,\cdots,b_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=b_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem [Wronskian]:</b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdots,x_n \}$ is linearly dependent on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$, not all zero, such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. $$ The existence of a non-trivial solution $[\alpha_j]$ forces the determinant of the coefficient matrix, which is the Wronskian, to vanish for every $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t_0 \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$, not all zero, such that the matrix equation of the previous paragraph holds at $t_0$. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t_0)=x'(t_0)=\cdots=x^{(n-1)}(t_0)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is linearly dependent. $\blacksquare$</p> <p>To see that the set of solutions is an $n$-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$, by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p>
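<p>The existence proof is only named, so here is a minimal symbolic sketch of Picard iteration (my own illustration; for readability it uses the first-order problem $x'=x$, $x(0)=1$ rather than the general $n$th order system, to which the same map applies componentwise): each step applies $x \mapsto x_0+\int_{t_0}^{t} f(s,x(s))\,ds$, and the iterates are the Taylor partial sums of $e^{t}$.</p> <pre><code>import sympy as sp

t, s = sp.symbols('t s')

# Picard iteration for x' = f(t, x), x(t0) = x0.
# Illustrative choice: f(t, x) = x, t0 = 0, x0 = 1, exact solution exp(t).
f = lambda time, value: value
t0, x0 = 0, sp.Integer(1)

x = x0                            # zeroth iterate: the constant x0
for _ in range(4):
    x = x0 + sp.integrate(f(s, x.subs(t, s)), (s, t0, t))
    print(sp.expand(x))           # successive Taylor partial sums of exp(t)
</code></pre> <p>The uniform convergence of these iterates on compact intervals is what the cited Picard argument establishes; note that it is the uniqueness half that the Wronskian proof above actually uses.</p>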
$$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. $$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 8: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 9: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 10: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 11: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 12: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 13: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 14: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 15: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 16: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 17: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 18: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 19: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 20: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 21: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 22: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 23: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 24: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 25: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 26: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 27: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 28: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 29: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 30: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 31: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
$$ The existence of a non-trivial solutions $[\alpha_j]$ forces the determinant of the coefficient matrix--which is the Wronskian--to vanish for all $t$. Therefore the Wronskian of the solutions $x_1,x_2,\cdots,x_n$ vanishes identically if the solutions are linearly dependent.</p> <p>Conversely, suppose that the Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some $t \in I$. Then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that the matrix equation of the previous paragraph holds. Hence, $$ x(t) = \alpha_1 x_1(t)+\alpha_2 x_2(t) + \cdots + \alpha_n x_n(t) $$ is a solution of your ODE which satisfies $x(t)=x'(t)=\cdots=x^{(n-1)}(t)=0$. By uniqueness of solutions, $x\equiv 0$, which proves that the set of functions $\{x_1,x_2,\cdots,x_n\}$ is a linear independent set of functions. $\blacksquare$</p> <p>To see that the set of solutions is an n-dimensional vector space, let $S$ be the set of solutions on the interval $I$, and let $t \in I$. Then the map $$ M : S \rightarrow \mathbb{R}^{n} $$ defined by $$ Mx = \left[\begin{array}{c}x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{array}\right] $$ is linear. This map is injective because $Mx=0$ iff $x\equiv 0$ by uniqueness of solutions. The map is surjective because of the existence of solutions. So $S$ is $n$-dimensional because $M$ is a linear isomorphism.</p> Answer 32: <p>Only a remark for the contrapositive: the fact that the Wronskian is zero is a necessary but not sufficient condition for the linear dependence of the functions considered. As an example, take $x_1(t)=t^2$ and $x_2(t)=t|t|$. Then, for any $t \in \mathbb{R}$ the Wronskian $W(x_1,x_2;t)=0$, but $x_1(t)$ and $X_2(t)$ are linearly independent. As a matter of fact, consider the equation $c_1t^2+c_2t|t|=0$ and note what happens if you take $t=1$ and then $t=-1$. This happens since the previous functions $x_1(t)$ and $x_2(t)$ are solutions of no differential equation.</p> <p>In order to achieve the proof you need, you have to observe that if the fundamental solutions of a $n$th order linear equation are linearly dependent, then also $x_1'(t),\dots,x_n'(t)$ are. To show it, take the linear combination $c_1x_1(t)+\dots+c_nx_n(t)=0$ and derive it. The converse is obvious. Thus, if you take the vectors $(x_j^(i)(t))$, where $i$ is the derivation order ($i=0,\dots,n-1$ and $j=1,\dots,n$ is the index of the solutions), and put them into a matrix, the wronskian. You conclude that the wronskian is non-zero for any $t \in \mathbb{R}$ if and only if the columns are linearly independent.</p> <p>Now, we can finish the proof recalling Liouville's Theorem: the wronskian of the solutions of a system of linear differential equations $x'=A(t)x$ satisfies the differential equation $\bf{W}'=tr(A(t))W$ (in your case, by defining $y_i$ as you did, you obtain a particular form for the matrix A). 
This is true, since for any interval $(\tau,\tau+\epsilon)$, we have </p> <p>$$ \bf{W}(\tau+\epsilon)=W(\tau)+\epsilon W'(\tau)+o(\epsilon)=(I_n+\epsilon A(\tau))W(\tau)+o(\epsilon) $$</p> <p>By Binet, recalling that $det(I_n+\epsilon A)=1+\epsilon tr(A)+O(\epsilon^2)$ (observe that the eigenvalues of $I_n+\epsilon A$ are $1+\epsilon \lambda_i$, where $\lambda_i$ are the eigenvalues of A), you have that</p> <p>$$ W(\tau+\epsilon)=det(I_n+\epsilon tr(A(\tau))+o(\epsilon) $$ by which it follows that</p> <p>$$ \frac{W(\tau+\epsilon)-W(\tau)}{\epsilon}=tr(A(\tau))W(\tau)+o(1) $$ that for $\epsilon \to 0$ satisfies the thesis.</p> <p>Liouville's theorem implies that $W(t)=W(\tau)exp(\int_\tau^t tr(A(s)) ds)$, so the Wronskian could be identically zero or non-zero everywhere. This concludes the proof.</p> Answer 33: <p>First let us prove that it is a vector space. Since for two solutions $x_1$ and $x_2$ of the above equation we have \begin{align*}&amp;\left(\frac{d^n}{dt^n}+a_{n-1}\frac{d^{n-1}}{dt^{n-1}}+\cdots+a_0\right)(cx_1+x_2)\\=&amp;c\left(\frac{d^nx_1}{dt^n}+a_{n-1}\frac{d^{n-1}x_1}{dt^{n-1}}+\cdots+a_0x_1\right)+\left(\frac{d^nx_2}{dt^n}+a_{n-1}\frac{d^{n-1}x_2}{dt^{n-1}}+\cdots+a_0x_2\right)\\=&amp;0+0\\=&amp;0\end{align*} Hence it is a vector space. Now assume $W\neq0$ for some point $t_0$. Consider the Linear combination $$c_1x_1+c_2x_2+\cdots+c_nx_n=0$$ then we have \begin{align*} &amp;c_1x_1'+c_2x_2'+\cdots+c_nx_n'=0\\ &amp;c_1x_1^{''}+c_2x_2^{''}+\cdots+c_nx_n^{''}=0\\ &amp;.\\ &amp;.\\ &amp;c_1x_1^n+c_2x_2^n+\cdots+c_nx_n^n=0~~~~~\text{for all t} \end{align*}Also at $t_0$ this equation holds and in matrix form we can write this as $$W(t_0)C=\mathbb{0},~~~ \text{where } C=[c_1~c_2~\cdots c_n]^T $$ but at $t_0$, $W(t_0)\neq0$ and hence $C=0.$ Which shows that ${x_1,x_2\cdots,x_n}$ are LI and since there can at most n LI solutions so the dimension of this vector space is n. </p> Answer 34: <p>I'll assume all of the coefficients $a_k$ are continuous on the interval of interest $I$.</p> <p><b>Theorem: [Existence and Uniqueness]</b> Let $a_0,a_1,\cdots,a_{n-1}$ be fixed constants, and let $t_0$ be a point in the interval of interest $I$ for the ordinary differential equation stated in your problem. Then there exists a unique solution $x(t)$ defined on $I$ such that $$ x^{(k)}(t_0)=a_k,\;\;\; k=0,1,2,\cdots,n-1. $$ <b>Proof:</b> Use Picard iteration to establish existence and uniqueness.</p> <p><b>Theorem: [Wronskii] </b> Let $\{ x_1,x_2,\cdots,x_n \}$ be a set of solutions of the ODE. The Wronskian $W(x_1,x_2,\cdots,x_n)$ vanishes at some point $t_0$ of the interval of interest $I$ iff $\{ x_1,x_2,\cdots,x_n \}$ is a linearly dependent set of functions on $I$.</p> <p><b>Proof:</b> If the set of functions $\{ x_1,x_2,\cdot,x_n \}$ is a linearly dependent set of functions on $I$, then there are constants $\alpha_1,\alpha_2,\cdots,\alpha_n$ such that $$ \alpha_1 x_1(t)+\alpha_2 x_2(t)+\cdots +\alpha_n x_n(t) = 0,\;\;\; t\in I. $$ By differentiating this equation $n-1$ times, one obtains the matrix equation $$ \left[\begin{array}{cccc} x_1(t) &amp; x_2(t) &amp; \cdots &amp; x_n(t) \\ x_1'(t) &amp; x_2'(t) &amp; \cdots &amp; x_n'(t) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ x_1^{(n-1)}(t) &amp; x_2^{(n-1)}(t) &amp; \cdots &amp; x_n^{(n-1)}(t) \end{array}\right] \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{array}\right] = 0. 
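A minimal numerical sketch of the counterexample in Answer 32, using the elementary derivatives $x_1'(t)=2t$ and $x_2'(t)=2|t|$ (for all real $t$, $\tfrac{d}{dt}\,t|t|=2|t|$): the Wronskian vanishes at every sample point even though the pair is linearly independent.

```python
# Wronskian W(t) = x1(t)*x2'(t) - x2(t)*x1'(t) for x1 = t^2, x2 = t|t|.
def x1(t): return t * t
def x2(t): return t * abs(t)
def dx1(t): return 2 * t
def dx2(t): return 2 * abs(t)  # d/dt (t|t|) = 2|t| for every real t

for t in (-2.0, -0.5, 0.3, 1.7):
    W = x1(t) * dx2(t) - x2(t) * dx1(t)
    print(f"W({t}) = {W}")  # 0.0 at every sample point

# Independence: c1*x1(t) + c2*x2(t) = 0 at t = 1 gives c1 + c2 = 0,
# and at t = -1 gives c1 - c2 = 0, so c1 = c2 = 0.
```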
Derivative of Function with Cases: $f(x)=x^2\sin x^{-1}$ for $x\ne0$
2,096,965
9
https://math.stackexchange.com/questions/2096965/derivative-of-function-with-cases-fx-x2-sin-x-1-for-x-ne0
<p>So far I assumed the derivative of a function with cases such as this one:</p> <p>$$f(x) = \begin{cases}x^2\sin x^{-1} &amp; \text{ if } x \neq 0\\ 0 &amp; \text{ else }\end{cases}$$ would be the cases of the derivatives.</p> <p>So, for $f'$ I would get:</p> <p>$$f'(x) = \begin{cases} 2x \sin{\left( \frac{1}{x} \right)} -\cos{\left( \frac{1}{x} \right)} &amp; \text{if } x \neq 0\\ 0 &amp; \text{else} \end{cases}$$</p> <p>And for $f''$</p> <p>$$f''(x) = \begin{cases} \frac{(2x^2 - 1)\sin{\left( \frac{1}{x} \right)} - 2x\cos{\left( \frac{1}{x} \right)}}{x^2} &amp; \text{if } x \neq 0\\ 0 &amp; \text{else} \end{cases}$$</p> <p>However, supposedly $f$ is only differentiable once and not twice. Therefore I must have made a mistake here, but I am at a loss as to what that would be. Is the derivative of cases not the cases of the derivatives?</p>
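A quick numeric sanity check of the $x \neq 0$ formulas above (the trouble, as the answers below show, is only at $x=0$): a minimal sketch comparing the claimed $f''$ against a central difference of $f'$ at a sample point away from the origin.

```python
import math

# f'(x) and the claimed f''(x) for x != 0, as written in the question.
def fp(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

def fpp_claimed(x):
    return ((2 * x * x - 1) * math.sin(1 / x) - 2 * x * math.cos(1 / x)) / (x * x)

# Away from 0, a central difference of f' should match the claimed f''.
x, h = 0.5, 1e-6
approx = (fp(x + h) - fp(x - h)) / (2 * h)
print(approx, fpp_claimed(x))  # the two values agree closely
```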
[ { "answer_id": 2096967, "score": 8, "text": "<p>\"The derivative of the cases is the cases of the derivatives\", as you write, on any point <em>internal</em> to an interval on which a single case applies. But at any point where two intervals relative to two different cases meet the derivative might not even exist, even if it exists at any point arbitrarily near to it (and even if it approaches the same limit from the right and the left, in fact). Think about $g(x)$ defined as $1/x$ for $x\\neq 0$ and as $0$ for $x=0$: is there a derivative in $0$?</p>\n\n<p>To make this all more formal, check the definition of derivative as a limit, and note that in general you really need \"a little space around\" a point in which only a single case applies to find it.</p>\n" }, { "answer_id": 2096966, "score": 5, "text": "<p>How you concluded $f'(0)=0$ you made no attempt to explain. Remember that\n$$\nf'(0) = \\lim_{h\\to0} \\frac{ f(0+h) - f(0) } h.\n$$\nYou'll probably need to squeeze in order to find the limit. Next you have\n$$\nf''(0) = \\lim_{h\\to0} \\frac{f'(0+h) - f'(0)} h.\n$$\nAnd again you'll probably have to squeeze. However, it's not hard to see without doing that, that $f'$ is not continuous at $0$ since it approaches no limit at $0$ because of the way it oscillates. Therefore it cannot be differentiable at $0.$</p>\n" }, { "answer_id": 2096967, "score": 8, "text": "<p>\"The derivative of the cases is the cases of the derivatives\", as you write, on any point <em>internal</em> to an interval on which a single case applies. But at any point where two intervals relative to two different cases meet the derivative might not even exist, even if it exists at any point arbitrarily near to it (and even if it approaches the same limit from the right and the left, in fact). Think about $g(x)$ defined as $1/x$ for $x\\neq 0$ and as $0$ for $x=0$: is there a derivative in $0$?</p>\n\n<p>To make this all more formal, check the definition of derivative as a limit, and note that in general you really need \"a little space around\" a point in which only a single case applies to find it.</p>\n" }, { "answer_id": 2096966, "score": 5, "text": "<p>How you concluded $f'(0)=0$ you made no attempt to explain. Remember that\n$$\nf'(0) = \\lim_{h\\to0} \\frac{ f(0+h) - f(0) } h.\n$$\nYou'll probably need to squeeze in order to find the limit. Next you have\n$$\nf''(0) = \\lim_{h\\to0} \\frac{f'(0+h) - f'(0)} h.\n$$\nAnd again you'll probably have to squeeze. However, it's not hard to see without doing that, that $f'$ is not continuous at $0$ since it approaches no limit at $0$ because of the way it oscillates. Therefore it cannot be differentiable at $0.$</p>\n" }, { "answer_id": 2096967, "score": 8, "text": "<p>\"The derivative of the cases is the cases of the derivatives\", as you write, on any point <em>internal</em> to an interval on which a single case applies. But at any point where two intervals relative to two different cases meet the derivative might not even exist, even if it exists at any point arbitrarily near to it (and even if it approaches the same limit from the right and the left, in fact). 
Think about $g(x)$ defined as $1/x$ for $x\\neq 0$ and as $0$ for $x=0$: is there a derivative in $0$?</p>\n\n<p>To make this all more formal, check the definition of derivative as a limit, and note that in general you really need \"a little space around\" a point in which only a single case applies to find it.</p>\n" }, { "answer_id": 2096966, "score": 5, "text": "<p>How you concluded $f'(0)=0$ you made no attempt to explain. Remember that\n$$\nf'(0) = \\lim_{h\\to0} \\frac{ f(0+h) - f(0) } h.\n$$\nYou'll probably need to squeeze in order to find the limit. Next you have\n$$\nf''(0) = \\lim_{h\\to0} \\frac{f'(0+h) - f'(0)} h.\n$$\nAnd again you'll probably have to squeeze. However, it's not hard to see without doing that, that $f'$ is not continuous at $0$ since it approaches no limit at $0$ because of the way it oscillates. Therefore it cannot be differentiable at $0.$</p>\n" } ]
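A minimal worked version of the squeeze step that both answers leave to the reader, assuming (as the question's cases imply) that $f(x)=x^2\sin x^{-1}$ is patched by $f(0)=0$:

$$\left|\frac{f(h)-f(0)}{h}\right| = \left|h\sin\frac{1}{h}\right| \le |h| \;\xrightarrow[h\to 0]{}\; 0 \quad\Longrightarrow\quad f'(0)=0,$$

while for $x\neq 0$ the usual rules give $f'(x)=2x\sin\frac{1}{x}-\cos\frac{1}{x}$. The $\cos\frac{1}{x}$ term swings between $-1$ and $1$ on every punctured neighbourhood of $0$, so $f'$ has no limit at $0$; this is exactly the oscillation the squeeze answer appeals to, and it rules out continuity (and hence differentiability) of $f'$ at $0$.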
Question: Derivative of Function with Cases: $f(x)=x^2\sin x^{-1}$ for $x\ne0$ Answer 1: <p>"The derivative of the cases is the cases of the derivatives", as you write, on any point <em>internal</em> to an interval on which a single case applies. But at any point where two intervals relative to two different cases meet the derivative might not even exist, even if it exists at any point arbitrarily near to it (and even if it approaches the same limit from the right and the left, in fact). Think about $g(x)$ defined as $1/x$ for $x\neq 0$ and as $0$ for $x=0$: is there a derivative in $0$?</p> <p>To make this all more formal, check the definition of derivative as a limit, and note that in general you really need "a little space around" a point in which only a single case applies to find it.</p> Answer 2: <p>How you concluded $f'(0)=0$ you made no attempt to explain. Remember that $$ f'(0) = \lim_{h\to0} \frac{ f(0+h) - f(0) } h. $$ You'll probably need to squeeze in order to find the limit. Next you have $$ f''(0) = \lim_{h\to0} \frac{f'(0+h) - f'(0)} h. $$ And again you'll probably have to squeeze. However, it's not hard to see without doing that, that $f'$ is not continuous at $0$ since it approaches no limit at $0$ because of the way it oscillates. Therefore it cannot be differentiable at $0.$</p>
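The same conclusion can be checked numerically. This is only a sanity-check sketch in plain Python (standard library only; the helper names are illustrative, not from the question):

```python
import math

def f(x):
    """The function from the question: x^2 * sin(1/x), patched by f(0) = 0."""
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):
    """Closed-form derivative away from 0: 2x sin(1/x) - cos(1/x)."""
    return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# The difference quotient (f(h) - f(0))/h = h sin(1/h) is squeezed to 0 ...
for h in (1e-1, 1e-3, 1e-5):
    print(f"h = {h:.0e}   (f(h) - f(0))/h = {f(h) / h: .3e}")

# ... but f' itself keeps oscillating: along h_n = 1/(n*pi) we get
# f'(h_n) = -cos(n*pi) = +/-1, so f' has no limit at 0.
for n in (1000, 1001, 10**6, 10**6 + 1):
    h = 1.0 / (n * math.pi)
    print(f"h = 1/({n}*pi)   f'(h) = {fprime(h): .6f}")
```

The first loop illustrates the squeeze ($f'(0)=0$), while the second shows $f'$ taking values near $-1$ and $+1$ arbitrarily close to $0$, so $f''(0)$ cannot exist.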
Is there any deep reason that special trig angles are a sequence of roots of integers?
2,097,685
9
https://math.stackexchange.com/questions/2097685/is-there-any-deep-reason-that-special-trig-angles-are-a-sequence-of-roots-of-int
<p>We are all familiar with certain special values of sin and cos, e.g. $\sin(30^\circ)=0.5$, $\sin(45^\circ)=\frac{1}{\sqrt{2}}$, etc. In high school I found these values difficult to remember until I noticed they could be reformulated as this pattern:</p> <p>$$\sin(0^\circ)=\frac{\sqrt{0}}{2}$$</p> <p>$$\sin(30^\circ)=\frac{\sqrt{1}}{2}$$</p> <p>$$\sin(45^\circ)=\frac{\sqrt{2}}{2}$$</p> <p>$$\sin(60^\circ)=\frac{\sqrt{3}}{2}$$</p> <p>$$\sin(90^\circ)=\frac{\sqrt{4}}{2}$$</p> <p>This has been bugging me ever since. I know to expect some deep cause behind a pattern when I see one, but I have no idea what causes these common special-angle values to be half-roots of integers, or whether this pattern is just a special case of a more general notion of special trig angles.</p> <p>What's the explanation? Why should this be so?</p>
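A quick numerical sanity check of the claimed pattern, sketched in plain Python (standard library only). The two decimal angles quoted in one of the answers below are treated here as rounded values of $\arcsin\sqrt{1/3}$ and $\arcsin\sqrt{2/3}$, which is an assumption consistent with that answer's formulas:

```python
import math

# Check: sin of 0, 30, 45, 60, 90 degrees equals sqrt(k)/2 for k = 0..4.
for k, deg in enumerate([0, 30, 45, 60, 90]):
    lhs = math.sin(math.radians(deg))
    rhs = math.sqrt(k) / 2.0
    ok = math.isclose(lhs, rhs, abs_tol=1e-12)
    print(f"sin({deg:2d} deg) = {lhs:.6f}   sqrt({k})/2 = {rhs:.6f}   match: {ok}")

# The decimal angles quoted in one answer below are rounded values of
# arcsin(sqrt(1/3)) and arcsin(sqrt(2/3)):
print(math.degrees(math.asin(math.sqrt(1.0 / 3.0))))  # ~35.2644
print(math.degrees(math.asin(math.sqrt(2.0 / 3.0))))  # ~54.7356
```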
[ { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" }, { "answer_id": 2098149, "score": 1, "text": "<p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p>\n\n<p>Since $\\sin (90^{\\circ}-\\theta)=\\sqrt{1-\\sin^{2} \\theta}$,</p>\n\n<p>$$\\sin 0^{\\circ}=\\frac{\\sqrt{0}}{2} \\iff \\sin 90^{\\circ}=\\frac{\\sqrt{4}}{2}$$</p>\n\n<p>$$\\sin 30^{\\circ}=\\frac{\\sqrt{1}}{2} \\iff \\sin 60^{\\circ}=\\frac{\\sqrt{3}}{2}$$</p>\n\n<p>We might invent a mnemonic in this way:</p>\n\n<p>$$\\sin 35.26^{\\circ}=\\dfrac{\\sqrt{1}}{\\sqrt{3}}$$</p>\n\n<p>$$\\sin 54.74^{\\circ}=\\dfrac{\\sqrt{2}}{\\sqrt{3}}$$</p>\n\n<p>These angles relate to tetrahedron and are special angles in another way.</p>\n\n<p>Comparing the $\\color{green}{\\textbf{fitting curve}}$ with $\\color{red}{\\boldsymbol{\\sin \\dfrac{\\pi x}{12}}}$ below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Hyclf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Hyclf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2102086, "score": 1, "text": "<p>Unsatisfying explanation: $\\sin: [0,\\pi/2]\\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\\sin(a)=b$ for $b\\in[0,1]$ and if you pick $b$ to be a sequence like $\\frac{\\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\\sin^2(x)+\\sin^2(\\pi/2-x)=1$.</p>\n\n<p>To me, the real gem here is that rational multiples of $\\pi$ give algebraic numbers under $\\sin$ and $\\cos$ (and therefore under $\\tan,\\sec,\\csc,\\cot$ where appropriately defined). Here's why: </p>\n\n<p>Suppose $\\theta$ is a rational multiple of $\\pi$. We can write it as a rational multiple of $2\\pi$: $\\theta=\\frac{2p\\pi}{q}$. By De Moivre's formula, we have that $(\\cos\\theta+i\\sin\\theta)^q=\\cos(q\\theta)+i\\sin(q\\theta)=1$, so $\\cos\\theta+i\\sin\\theta$ is algebraic, and so each of $\\cos\\theta$ and $\\sin\\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\\frac{2a}{2}$ and $b=\\frac{a+bi-a}{i}$ are algebraic).</p>\n" }, { "answer_id": 2102184, "score": 0, "text": "<p>Let's make a table.</p>\n\n<p>$$\\begin{array}{ccccc|l}\\frac{\\sqrt 0}{2}&amp;\\frac{\\sqrt 1}{2}&amp;\\frac{\\sqrt 2}{2}&amp;\\frac{\\sqrt 3}{2}&amp;\\frac{\\sqrt 4}{2}&amp;\\text{sin}\\\\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\\text{angle (deg)}\n\\\\0&amp;2&amp;3&amp;4&amp;6&amp;\\text{multiple of 15}\\\\\n1&amp;0&amp;1&amp;0&amp;1&amp;\\text{divisible by 45?}\\\\\n1&amp;1&amp;0&amp;1&amp;1&amp;\\text{divisible by 30?}\\\\\\end{array}$$</p>\n\n<p>By periodicity this is a combination of the cyclic groups $\\frac{360}{45} = 8$ and $\\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if starting from 0, subset $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 1}{2},\\frac{\\sqrt 3}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 30 degrees and $\\pm\\left\\{\\frac{\\sqrt 0}{2},\\frac{\\sqrt 2}{2},\\frac{\\sqrt 4}{2}\\right\\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. 
And with a rotation of 15 degrees we can get a rather peculiar overlap.</p>\n" } ]
Question: Is there any deep reason that special trig angles are a sequence of roots of integers? Answer 1: <p>The intervals between the angles are not uniform. Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 2: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 3: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 4: <p>The intervals between the angles are not uniform. 
Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 5: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 6: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 7: <p>The intervals between the angles are not uniform. 
Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 8: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 9: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 10: <p>The intervals between the angles are not uniform. 
Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 11: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 12: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 13: <p>The intervals between the angles are not uniform. 
Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 14: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 15: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 16: <p>The intervals between the angles are not uniform. 
Somehow it's mathematical coincidence:</p> <p>Since $\sin (90^{\circ}-\theta)=\sqrt{1-\sin^{2} \theta}$,</p> <p>$$\sin 0^{\circ}=\frac{\sqrt{0}}{2} \iff \sin 90^{\circ}=\frac{\sqrt{4}}{2}$$</p> <p>$$\sin 30^{\circ}=\frac{\sqrt{1}}{2} \iff \sin 60^{\circ}=\frac{\sqrt{3}}{2}$$</p> <p>We might invent a mnemonic in this way:</p> <p>$$\sin 35.26^{\circ}=\dfrac{\sqrt{1}}{\sqrt{3}}$$</p> <p>$$\sin 54.74^{\circ}=\dfrac{\sqrt{2}}{\sqrt{3}}$$</p> <p>These angles relate to tetrahedron and are special angles in another way.</p> <p>Comparing the $\color{green}{\textbf{fitting curve}}$ with $\color{red}{\boldsymbol{\sin \dfrac{\pi x}{12}}}$ below:</p> <p><a href="https://i.stack.imgur.com/Hyclf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyclf.jpg" alt="enter image description here"></a></p> Answer 17: <p>Unsatisfying explanation: $\sin: [0,\pi/2]\to [0,1]$ is a monotonically increasing homeomorphism, so you'll always be able to find values which solve $\sin(a)=b$ for $b\in[0,1]$ and if you pick $b$ to be a sequence like $\frac{\sqrt{k}}{n}$ where $k$ runs from $0$ to $n^2$, you'll see the same sort of mirrored progression in the angles, as $\sin^2(x)+\sin^2(\pi/2-x)=1$.</p> <p>To me, the real gem here is that rational multiples of $\pi$ give algebraic numbers under $\sin$ and $\cos$ (and therefore under $\tan,\sec,\csc,\cot$ where appropriately defined). Here's why: </p> <p>Suppose $\theta$ is a rational multiple of $\pi$. We can write it as a rational multiple of $2\pi$: $\theta=\frac{2p\pi}{q}$. By De Moivre's formula, we have that $(\cos\theta+i\sin\theta)^q=\cos(q\theta)+i\sin(q\theta)=1$, so $\cos\theta+i\sin\theta$ is algebraic, and so each of $\cos\theta$ and $\sin\theta$ are as well (if $a+bi$ is algebraic, then $a-bi$ is also algebraic, as algebraicness is preserved under field automorphisms, and so $(a+bi)+(a-bi)=2a$ is algebraic as well, and therefore since algebraic numbers form a field, we have that $a=\frac{2a}{2}$ and $b=\frac{a+bi-a}{i}$ are algebraic).</p> Answer 18: <p>Let's make a table.</p> <p>$$\begin{array}{ccccc|l}\frac{\sqrt 0}{2}&amp;\frac{\sqrt 1}{2}&amp;\frac{\sqrt 2}{2}&amp;\frac{\sqrt 3}{2}&amp;\frac{\sqrt 4}{2}&amp;\text{sin}\\\hline0&amp;30&amp;45&amp;60&amp;90&amp;\text{angle (deg)} \\0&amp;2&amp;3&amp;4&amp;6&amp;\text{multiple of 15}\\ 1&amp;0&amp;1&amp;0&amp;1&amp;\text{divisible by 45?}\\ 1&amp;1&amp;0&amp;1&amp;1&amp;\text{divisible by 30?}\\\end{array}$$</p> <p>By periodicity this is a combination of the cyclic groups $\frac{360}{45} = 8$ and $\frac{360}{30} = 12$ considering rotation as the generator. This means we will stay in the set $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 2}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if starting from 0, subset $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 1}{2},\frac{\sqrt 3}{2},\frac{\sqrt 4}{2}\right\}$ if 30 degrees and $\pm\left\{\frac{\sqrt 0}{2},\frac{\sqrt 2}{2},\frac{\sqrt 4}{2}\right\}$ if 45 degrees. Number theoretic every integer times 15 deg which is a multiple of 2 <strong>or</strong> 3. And with a rotation of 15 degrees we can get a rather peculiar overlap.</p> Answer 19: <p>The intervals between the angles are not uniform. 
Show that the extension of $f : S → \mathbb{R}$ to the closure of its domain, $\overline{S}$, is unique
2,099,324
9
https://math.stackexchange.com/questions/2099324/show-that-the-extension-of-f-s-%e2%86%92-mathbbr-to-the-closure-of-its-domain
<p>I am trying to solve the following question:</p> <blockquote> <p>Assume that $f : S → \mathbb{R}$ is a uniformly continuous function defined on a subset $S$ of a metric space $M$. <br> 1. Prove that $f$ extends to a uniformly continuous function $\overline{f} : \overline{S} → \mathbb{R}$. <br> 2. Prove that $\overline{f}$ is the unique continuous extension of $f$ to a function defined on $\overline{S}$. <br> 3. Prove the same things when $\mathbb{R}$ is replaced with a complete metric space $N$.</p> </blockquote> <p>For the first question, I show that for a sequence $x_n \in S$ that converges to $\hat{x} \in \overline{S}$, $f(x_n)$ converges, since uniformly continuous functions take Cauchy sequences to Cauchy sequences and since $\mathbb{R}$ is complete. However, I feel that while this shows that $f(x_n)$ converges, it does not fully establish that $f(x_n)$ converges to $\overline{f}(\hat{x})$. <br> Is this proof sufficient to show that $f$ extends to a uniformly continuous function $\overline{f} : \overline{S} → \mathbb{R}$? <br> <br> For number 3, the same proof applies once the completeness of $\mathbb{R}$, which my proof relies on, is replaced by the completeness of $N$.<br><br> However, I don't understand how I would show that $\overline{f}$ is the unique extension of $f$. Is it sufficient to say that since all sequences in $S$ must converge to their unique limits in $\overline{S}$, $\overline{f}$ must also be unique?</p>
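<p>One small lemma the question leans on deserves to be spelled out — the following is the standard argument, supplied here as a sketch for completeness: a uniformly continuous $f$ maps Cauchy sequences in $S$ to Cauchy sequences. Given $\epsilon &gt; 0$, pick $\delta &gt; 0$ with $$d(x,y) &lt; \delta \implies d(f(x),f(y)) &lt; \epsilon \quad (x,y \in S),$$ and, for a Cauchy sequence $(x_n)$, pick $N$ with $d(x_m,x_n) &lt; \delta$ for all $m,n \ge N$; then $d(f(x_m),f(x_n)) &lt; \epsilon$ for all $m,n \ge N$, so $(f(x_n))$ is Cauchy.</p>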
[ { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. 
This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. 
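<p>As a concrete instance of the theorem — an illustrative example chosen here, with this particular $S$ and $f$ not taken from the question: let $S=(0,1]\subset\mathbb{R}$ and $f(x)=\dfrac{\sin x}{x}$, which is uniformly continuous on $S$. Then $\overline{S}=[0,1]$, and since $$\lim_{x\to 0^{+}}\frac{\sin x}{x}=1,$$ the construction above forces $\overline{f}(0)=\lim f(x_n)=1$ for every sequence $x_n\to 0$ in $S$; no other value at $0$ yields a continuous extension. By contrast, $g(x)=\sin(1/x)$ on $(0,1]$ is continuous but not uniformly continuous, and it admits no continuous extension to $[0,1]$: the sequences $x_n=\frac{1}{2\pi n}$ and $y_n=\frac{1}{2\pi n+\pi/2}$ both tend to $0$, yet $g(x_n)=0$ while $g(y_n)=1$.</p>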
</p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. 
Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. 
Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. 
There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. 
</p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. 
As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. 
To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. 
</p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. 
</p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. 
Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. 
Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. 
There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. 
</p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" }, { "answer_id": 2099358, "score": 7, "text": "<p>To define $\\bar f(\\hat x)$, pick any sequence $x_n\\to \\hat x$ and let $\\bar f(\\hat x)=\\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p>\n\n<p>This $\\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\\to\\hat x$ and $y_n\\to \\hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\\lim f(x_n)=\\lim f(y_n)$, as was to be shown.</p>\n\n<p>Also, $\\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\\hat x$ for $\\hat x\\in S$.</p>\n\n<p>Finally, $\\hat f$ is uniformly continuous. Indeed, let $\\epsilon&gt;0$ be given. 
As $f$ is uniformly continuous, there exists $\\delta&gt;0$ such that $d(x,y)&lt;\\delta$ implies $|f(x)-f(y)|&lt;\\frac13\\epsilon$. Then for $\\hat x,\\hat y\\in \\bar S$ with $d(\\hat x,\\hat y)&lt;\\frac 13\\delta$, and accordingly sequences $x_n\\to x$, $y_n\\to y$, we have $|f(\\hat x)-f(x_n)|&lt;\\frac13\\epsilon$ and also $d(x_n\\hat x)&lt;\\frac 13\\delta$, if only $n$ is big enough. Likewise, we have $|f(\\hat y)-f(y_n)|&lt;\\frac13\\epsilon$ and $d(\\hat y,y_n)&lt;\\frac13\\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\\hat x)+d((\\hat x,\\hat y)+d(\\hat y,y_n)&lt;\\delta$, hence $|f(x_n)-f(y-n)|&lt;\\frac13\\epsilon$, hence $|f(\\hat x)-f(\\hat y)|\\le |f(\\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\\hat y)|&lt;\\epsilon$, as desired.</p>\n\n<p>Thus we have found a uniformly continuous extension of $f$ to all of $\\bar S$.</p>\n\n<hr>\n\n<p>Looking at the definition, we must have $\\bar f(\\hat x)=\\lim_{n\\to\\infty}f(x_n)$ whenever $x_n\\to\\hat x$, or else $\\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\\ni x_n\\to\\hat x$ exists for all $\\hat x\\in \\bar S$, ew do not have any choice at all, i.e., our $\\bar f$ above is the unique continuous extension.</p>\n" }, { "answer_id": 2099334, "score": 1, "text": "<p>First, you need to define $\\overline{f}(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ in some way. </p>\n\n<p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\\hat{x}$. </p>\n\n<p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p>\n\n<p>This then more-or-less shows the uniqueness as the value of $g(\\hat{x})$ for $\\hat{x}$ in $\\overline{S}\\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \\to x$. </p>\n\n<p>What you still need to show is that the function you get is in fact uniformly continuous. </p>\n" }, { "answer_id": 2099356, "score": 0, "text": "<p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p>\n\n<p>Suppose $f:S\\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\\overline S:$</p>\n\n<p>Suppose $\\overline f$ and $\\overline g$ are two different extensions of $f$. This means that we may choose $x\\in \\overline S$ such that $\\overline f(x)\\neq \\overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\\overline f(x)$ and $\\overline g(x),$ respectively. Continuity of $\\overline f$ and $\\overline g$ now gives us a neighborhood $V$ of $x$ such that $\\overline f(V)\\subseteq U$ and $\\overline g(V)\\subseteq U'.$ Now observe that because $x\\in \\overline S, V$ must contain a point $p$ from $S$. But then $\\overline f(p)\\in U$ and $\\overline g(p)\\in U', $ so $\\overline f(p)\\neq \\overline g(p), $ which is a contradiction, since $\\overline f$ and $\\overline g$ agree on $S$ (they are both extensions of $f$).</p>\n" }, { "answer_id": 2099363, "score": 0, "text": "<p>For $x\\in \\overline S$, there exists a sequence $(x_n) \\subset S$ converging to $x$. Define $\\overline f$ to be $f$ on $S$ and $\\lim f(x_n)$ when $x \\notin S$, where $(x_n)$ is any sequence converging to $x$. 
To show that $\\overline f$ is well-defined, you need to prove that if $x_n \\to x$ and $y_n \\to x$, then $\\lim f(x_n) = \\lim f(y_n)$. Since $d(x_n,y_n) \\to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \\to 0$, so we are done.</p>\n\n<p>Now let $\\epsilon &gt;0$. By uniform continuity of $f$, there is $\\delta &gt;0$ such that for $a,b \\in S$, $d(a,b) &lt; 2\\delta \\implies d(f(a),f(b))&lt;\\epsilon/2$. For $x,y \\in \\overline S$, suppose that $d(x,y) &lt; \\delta$. There are $(x_n),(y_n) \\subset S$ converging to each, and we have $d(x_n,y_n) \\to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \\delta$, hence $d(x_n,y_n) &lt; 2\\delta$, for all $n\\ge n_0$, so $d(f(x_n),f(y_n))&lt;\\epsilon/2$, for all $n\\ge n_0$. By continuity, this gives $d(f(x),f(y)) \\le \\epsilon/2 &lt; \\epsilon$, i.e. $d(\\overline f(x), \\overline f(y)) &lt; \\epsilon$. </p>\n\n<p>Now if $\\hat f$ is another extension, then for $x\\in \\overline S$, choose a sequence $(x_n) \\subset S$ converging to $x$. We have $\\overline f(x) =\\lim f(x_n) = \\lim \\hat f(x_n) = \\hat f(\\lim x_n) = \\hat f(x)$.</p>\n" } ]
Question: Show that the extension of $f : S → \mathbb{R}$ to the closure of its domain, $\overline{S}$, is unique Answer 1: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p> <p>This $\bar f$ is well-defined, i.e., does not depend on the particular choice of sequence(s); this is simply because if we combine two sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\bar f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to \hat x$, $y_n\to \hat y$, we have $|\bar f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n,\hat x)&lt;\frac 13\delta$, provided $n$ is big enough. Likewise, we have $|\bar f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)\le d(x_n,\hat x)+d(\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y_n)|&lt;\frac13\epsilon$, hence $|\bar f(\hat x)-\bar f(\hat y)|\le |\bar f(\hat x)-f(x_n)|+|f(x_n)-f(y_n)|+|f(y_n)-\bar f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\bar f$ cannot be a continuous extension of $f$. As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, we do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p>
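Worked example (an added illustration, not part of the original answers; the choice of $S$ and $f$ is hypothetical): take $S=(0,1]$ with the usual metric and $f(x)=\frac{\sin x}{x}$, which is uniformly continuous on $S$. Here $\overline S=[0,1]$, and for the single new point $\hat x=0$ the sequence $x_n=\frac1n$ gives $\bar f(0)=\lim_{n\to\infty}\frac{\sin(1/n)}{1/n}=1$; any other sequence $y_n\to 0$ in $S$ must give the same value, since $d(x_n,y_n)\to 0$ and uniform continuity force $|f(x_n)-f(y_n)|\to 0$, exactly as in the well-definedness step of Answer 1.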
Answer 2: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness, as the value $g(\hat{x})$ of every continuous extension $g$ at $\hat{x}$ in $\overline{S}\setminus S$ must be the limit of $f(x_n)$ for a sequence $x_n \to \hat{x}$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 3: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p>
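The sequence-based definition in Answers 1 and 3 can also be sketched numerically. A minimal sketch, assuming $S=(0,1]$ and $f(x)=\sin x/x$ (this choice of example and the helper names f and f_bar are assumptions for illustration, not taken from the answers): <pre><code>import math

# Hypothetical example: S = (0, 1]; f is uniformly continuous on S
# but undefined at the closure point 0.
def f(x):
    return math.sin(x) / x

# Approximate the extension at a closure point x_hat by following any
# sequence in S converging to x_hat, as the answers prescribe.
def f_bar(x_hat, steps=40):
    x_n = 1.0                      # start anywhere in S
    for _ in range(steps):
        x_n = (x_n + x_hat) / 2.0  # halve the distance to x_hat; iterates stay in (0, 1]
    return f(x_n)

print(f_bar(0.0))  # ~1.0, the value forced on the unique extension at 0
</code></pre> Changing the starting point or the sequence changes the iterates but not the limit; that invariance is precisely the well-definedness argument of Answer 3.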
As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 6: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 7: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 8: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 9: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. 
</p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 10: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 11: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. 
But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 12: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 13: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. 
As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 14: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 15: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 16: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 17: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. 
</p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 18: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 19: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. 
But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 20: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 21: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. 
As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 22: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 23: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 24: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 25: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. 
</p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. As such a sequence $S\ni x_n\to\hat x$ exists for all $\hat x\in \bar S$, ew do not have any choice at all, i.e., our $\bar f$ above is the unique continuous extension.</p> Answer 26: <p>First, you need to define $\overline{f}(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ in some way. </p> <p>You can <em>define</em> it as the limit you considered for some fixed sequence that tends to $\hat{x}$. </p> <p>You can then proceed to show that it is independent/the same for any such sequence. To do this, for example, consider the sequence interleaving the two sequences.</p> <p>This then more-or-less shows the uniqueness as the value of $g(\hat{x})$ for $\hat{x}$ in $\overline{S}\setminus S$ for every continuous extension must be the limit of $f(x_n)$ for a sequence $x_n \to x$. </p> <p>What you still need to show is that the function you get is in fact uniformly continuous. </p> Answer 27: <p>Assuming you have provided the continuous extension, this may be one of those cases where, for uniqueness, the general result gives a clearer idea of what's going on and in fact, is not hard to prove. </p> <p>Suppose $f:S\to Y$ is continuous and that $Y$ is Hausdorff. Then, there is a most one continuous extension of $f$ to $\overline S:$</p> <p>Suppose $\overline f$ and $\overline g$ are two different extensions of $f$. This means that we may choose $x\in \overline S$ such that $\overline f(x)\neq \overline g(x).$ Since $Y$ is Hausdorff, we may also choose disjoint neighborhoods, $U$ and $U', $ of $\overline f(x)$ and $\overline g(x),$ respectively. Continuity of $\overline f$ and $\overline g$ now gives us a neighborhood $V$ of $x$ such that $\overline f(V)\subseteq U$ and $\overline g(V)\subseteq U'.$ Now observe that because $x\in \overline S, V$ must contain a point $p$ from $S$. 
But then $\overline f(p)\in U$ and $\overline g(p)\in U', $ so $\overline f(p)\neq \overline g(p), $ which is a contradiction, since $\overline f$ and $\overline g$ agree on $S$ (they are both extensions of $f$).</p> Answer 28: <p>For $x\in \overline S$, there exists a sequence $(x_n) \subset S$ converging to $x$. Define $\overline f$ to be $f$ on $S$ and $\lim f(x_n)$ when $x \notin S$, where $(x_n)$ is any sequence converging to $x$. To show that $\overline f$ is well-defined, you need to prove that if $x_n \to x$ and $y_n \to x$, then $\lim f(x_n) = \lim f(y_n)$. Since $d(x_n,y_n) \to 0$, uniform continuity tells us that $d(f(x_n),f(y_n)) \to 0$, so we are done.</p> <p>Now let $\epsilon &gt;0$. By uniform continuity of $f$, there is $\delta &gt;0$ such that for $a,b \in S$, $d(a,b) &lt; 2\delta \implies d(f(a),f(b))&lt;\epsilon/2$. For $x,y \in \overline S$, suppose that $d(x,y) &lt; \delta$. There are $(x_n),(y_n) \subset S$ converging to each, and we have $d(x_n,y_n) \to d(x,y)$. There exists $n_0$ such that $|d(x_n,y_n) - d(x,y)| &lt; \delta$, hence $d(x_n,y_n) &lt; 2\delta$, for all $n\ge n_0$, so $d(f(x_n),f(y_n))&lt;\epsilon/2$, for all $n\ge n_0$. By continuity, this gives $d(f(x),f(y)) \le \epsilon/2 &lt; \epsilon$, i.e. $d(\overline f(x), \overline f(y)) &lt; \epsilon$. </p> <p>Now if $\hat f$ is another extension, then for $x\in \overline S$, choose a sequence $(x_n) \subset S$ converging to $x$. We have $\overline f(x) =\lim f(x_n) = \lim \hat f(x_n) = \hat f(\lim x_n) = \hat f(x)$.</p> Answer 29: <p>To define $\bar f(\hat x)$, pick any sequence $x_n\to \hat x$ and let $\bar f(\hat x)=\lim f(x_n)$. As you said, this limit exists because $f$ is uniformly continuous. </p> <p>This $\bar f$ is well-defined, i.e., does not on the particular choice of sequence(s); this is simply because the if we combine tqo sequences $x_n\to\hat x$ and $y_n\to \hat x$, we get another convergent sequence. The function values of this converge, but as $f(x_n)$ and $f(y_n)$ are subsequences of it, we conclude that that $\lim f(x_n)=\lim f(y_n)$, as was to be shown.</p> <p>Also, $\bar f|_S=f$. This is clear because we can pick the constant sequence $x_n=\hat x$ for $\hat x\in S$.</p> <p>Finally, $\hat f$ is uniformly continuous. Indeed, let $\epsilon&gt;0$ be given. As $f$ is uniformly continuous, there exists $\delta&gt;0$ such that $d(x,y)&lt;\delta$ implies $|f(x)-f(y)|&lt;\frac13\epsilon$. Then for $\hat x,\hat y\in \bar S$ with $d(\hat x,\hat y)&lt;\frac 13\delta$, and accordingly sequences $x_n\to x$, $y_n\to y$, we have $|f(\hat x)-f(x_n)|&lt;\frac13\epsilon$ and also $d(x_n\hat x)&lt;\frac 13\delta$, if only $n$ is big enough. Likewise, we have $|f(\hat y)-f(y_n)|&lt;\frac13\epsilon$ and $d(\hat y,y_n)&lt;\frac13\delta$ for all sufficiently large $n$. Then for these $n$, we have $d(x_n,y_n)le d(x_n,\hat x)+d((\hat x,\hat y)+d(\hat y,y_n)&lt;\delta$, hence $|f(x_n)-f(y-n)|&lt;\frac13\epsilon$, hence $|f(\hat x)-f(\hat y)|\le |f(\hat x)-f(x_n)|+|f(x_n)-f(y-n)|+|f(y_n)-f(\hat y)|&lt;\epsilon$, as desired.</p> <p>Thus we have found a uniformly continuous extension of $f$ to all of $\bar S$.</p> <hr> <p>Looking at the definition, we must have $\bar f(\hat x)=\lim_{n\to\infty}f(x_n)$ whenever $x_n\to\hat x$, or else $\hat f$ cannot be a continuous extension of $f$. 
Conservation of dot product with parallel transport
2,099,505
9
https://math.stackexchange.com/questions/2099505/conservation-of-dot-product-with-parallel-transport
<p>I have a question about the parallel transport of a vector:</p> <p>Why does one say that parallel transport preserves the value of the dot product (scalar product) between the transported vector and the tangent vector?</p> <p>Is it due to the fact that the angle between the tangent vector and the transported vector stays the same during the transport (which is the definition of parallel transport)?</p> <p>So if I take two different points, is the dot product the same since the angle is the same?</p> <p>How can one prove this, or translate this statement into mathematical terms?</p> <p>Thanks for your help</p> <p><strong>UPDATE 1:</strong></p> <p>Thanks for your quick answer. Unfortunately, I am not an expert in tensor calculus, but I know some basics, like the definition of the covariant derivative of a vector $V$ along a geodesic, in this notation:</p> <p>$\nabla_{i}V^{j}=\partial_{i}V^{j}+V^{k}\Gamma_{ik}^{j}\quad\quad(1)$</p> <p>and the absolute derivative: $D\,V^{j}=(\nabla_{i}V^{j})dx^{i}\quad\quad(2)$</p> <p>Could you give me the link between your equation ($Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$) and equation (1) or (2)?</p> <p>Moreover, you define $Z$ as $\text{d}\gamma/\text{d}t$, but afterwards you only take $\text{d}/\text{d}t$ in:</p> <p>$\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$</p> <p>You say that $Z$ is a vector field: is it an operator or a vector field?</p> <p>And what about $\langle X,Y\rangle$?</p> <p>Can one write</p> <p>$\langle X,Y\rangle=g_{ij}X^{i}Y^{j}$</p> <p>with $g_{ij}$ the metric?</p> <p>Regards</p> <p><strong>UPDATE 2:</strong> </p> <p>I think there is a small index error in the third term of the first factor; it should be</p> <p>$\Big(\dfrac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0$</p> <p>and not</p> <p>$\Big(\dfrac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pi}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$</p> <p><strong>UPDATE 3:</strong> You write "why?" above the equals sign but you don't give the reason:</p> <p>$\mathrm{D}\langle\xi\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$</p> <p>Could you please justify this expression: $\mathrm{d}(g_{ij}\xi^i\eta^j) = 0$?</p> <p>I know that the squared length of the infinitesimal displacement with components $\text{d}x^{i}$ is invariant (I mean independent of the basis used) because</p> <p>$\text{d}s^2=g_{ij}\, \text{d}x^{i}\text{d}x^{j}=(\text{length})^2=\text{constant}$</p> <p>Is the justification the same for the vectors $\xi$ and $\eta$?</p>
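<p>One way to connect the notations in the question: a vector field $Z$ acts on a scalar function as a first-order differential operator (so the answer to "operator or vector field?" is: a vector field, which acts on scalars as a derivation). Taking $Z=\dot\gamma$ with components $\dot x^{i}=dx^{i}/dt$, one has, in the notation of (1) and (2), $$(\nabla_{\dot\gamma}X)^{j}=\dot x^{i}\,\nabla_{i}X^{j}=\frac{D X^{j}}{dt},$$ i.e., the absolute derivative (2) divided by $dt$, and $Z$ applied to the scalar $\langle X,Y\rangle=g_{ij}X^{i}Y^{j}$ is just $\frac{d}{dt}$ along the curve. Metric compatibility then reads, in components, $$\frac{d}{dt}\big(g_{ij}X^{i}Y^{j}\big)=g_{ij}\frac{DX^{i}}{dt}Y^{j}+g_{ij}X^{i}\frac{DY^{j}}{dt},$$ which vanishes when $X$ and $Y$ are parallel transported.</p>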
[ { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In a Riemannian space (a Riemannian manifold with metric) there exists a unique connection $\Gamma_{ij}^k(x)$ such that\n 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$;<br>\n 2. the scalar product of two parallel-transported vectors is constant along the path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\xi$ and $\eta$ be transported along a path $P$, and denote the metric tensor by $g_{ij}(x)$.\nIn Euclidean space $\mathbb{R}^n$ (where the constancy of the scalar product is obvious (why?)) it is possible to introduce a curvilinear coordinate system, which produces an affine connection $\Gamma_{ij}^k$ as well as a metric tensor (or Gram matrix) $g_{ij}$ (the inverse metric we will denote by $g^{kl}$). These two objects are related as follows (apologies for the double meaning of ''connection''):\n$$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\rhd$ Starting with the definition of the scalar product we can write:\n$$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$\nRequiring the scalar product to be constant, we write:\n$$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$\nBy the product rule we obtain:\n$$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0.$$\nBut the vectors $\xi,\eta$ are parallel-transported, so\n$$\mathrm{d}\xi^k = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$\nwhere $\mathrm{d}x^{p}$ is the coordinate differential along the path $P$; $\eta$ is transported similarly. Moreover,\n$$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ \nSo we have (renaming summation indices)\n$$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$\nOn the other hand, the vectors $\xi, \eta$ and the path $P$ can be arbitrary, so we have to require:\n$$\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k = 0.$$\nThis equation has the same form as in the Euclidean case, so we solve it in the same way.\nFrom this and condition 1 of the statement we can find the desired $\Gamma_{ij}^{k}$. $\lhd$</p>\n\n<hr>\n\n<h2>About the ''why's''</h2>\n\n<ol>\n<li>Why, in Euclidean space, it is obvious that the scalar product of two vectors remains constant while they are parallel-transported along a path $P$:\nbecause in Euclidean space you can choose a Cartesian coordinate system in which the <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0,$$\ni.e. why in this case the absolute differential ($\mathrm{D}$) equals the ordinary one ($\mathrm{d}$):\nbecause the scalar product is a scalar, i.e. it has no indices (though not every quantity without indices is invariant).</li>\n</ol>\n\n<p>Now about ''the same form as in Euclidean space''. Our purpose here is to express the connection $\Gamma$ through the metric tensor $g$. In Euclidean space we introduce\n$\Gamma_{ij}^{k}$ via the expression\n$$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$\nwhere $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ is the local basis in $\mathbb{R}^n$. 
Then, taking scalar products, we have\n$$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$\nwhere we used $\langle x_l, x_k\rangle = g_{lk}$. Next it is convenient to denote\n$$\Gamma_{l, ij} = g_{lk}\Gamma_{ij}^k.$$\nNext, we obtain\n$$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$\nor by the product rule\n$$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute the indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving, we get\n$$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{\partial x^m} + \frac{\partial g_{lm}}{\partial x^k} - \frac{\partial g_{km}}{\partial x^l}\Big).$$ Raising the index $l$, we get the desired result. </p>\n\n<hr>\n\n<p>P.S. You can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis'', though I do not know whether an English version of this book exists.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \in T_{p}M$ along a path $\gamma(t), \gamma(0) = p$ is the unique vector field $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0$ and $X_0 = e$.</p>\n\n<p>Now, by definition your connection is compatible with the metric, i.e. $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p>\n\n<p>Thus, taking $Z = d\gamma/dt$, we obtain that $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector fields. Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$, as desired.</p>\n" } ]
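<p>To make the compatibility equation above concrete, here is a minimal symbolic check (a sketch of my own, assuming the round metric on the unit 2-sphere and the <code>sympy</code> library; neither the example nor the coordinate names come from the answers themselves). It computes $\Gamma_{ij}^k$ from the quoted formula and verifies that $\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k$ vanishes identically:</p>

<pre><code>import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]                        # coordinates x^0, x^1
g = sp.Matrix([[1, 0],
               [0, sp.sin(theta)**2]])  # round metric g_ij on the unit sphere
ginv = g.inv()                          # inverse metric g^kl
n = 2

# Gamma^k_ij = (1/2) g^kl (d_j g_li + d_i g_lj - d_l g_ij), the quoted formula
Gamma = [[[sp.simplify(sum(
            sp.Rational(1, 2) * ginv[k, l]
            * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i]) - sp.diff(g[i, j], x[l]))
            for l in range(n)))
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]             # indexed as Gamma[k][i][j]

# Metric compatibility: d_p g_ij - g_kj Gamma^k_pi - g_ik Gamma^k_pj == 0
for p in range(n):
    for i in range(n):
        for j in range(n):
            expr = (sp.diff(g[i, j], x[p])
                    - sum(g[k, j] * Gamma[k][p][i] for k in range(n))
                    - sum(g[i, k] * Gamma[k][p][j] for k in range(n)))
            assert sp.simplify(expr) == 0

print("metric compatibility holds on the 2-sphere")
</code></pre>

<p>Running it prints the confirmation line; flipping any sign in the Christoffel formula makes the assertion fail, which is a quick way to see how tightly the compatibility condition constrains the connection.</p>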
These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. 
Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. </p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
More over,\n$$\\mathrm{d}g_{ij} = \\frac{\\partial g_{ij}}{\\partial x^p}\\mathrm{d}x^p.$$ \nSo we have (renaming summation indexes)\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big)\\xi^i\\eta^j\\mathrm{d}x^p = 0.$$\nOn the other hand, vectors $\\xi, \\eta$ and path $P$ can be arbitrary, so we have to require:\n$$\\Big(\\frac{\\partial g_{ij}}{\\partial x^p} - g_{kj}\\Gamma_{pi}^k - g_{ik}\\Gamma_{pj}^k\\Big) = 0.$$\nThis equation has the same form as in case of Euclidean Space so we solve this in the same way.\nFrom this and 1st condition of the statement we are able to find desired $\\Gamma_{ij}^{k} \\lhd.$ </p>\n\n<hr>\n\n<h2>About ''why's''</h2>\n\n<ol>\n<li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$.\nThis is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li>\n<li>Why $$\\mathrm{D}\\langle\\xi,\\eta\\rangle = \\mathrm{d}\\langle\\xi,\\eta\\rangle = 0.$$\nI.e. why in this case absolute diferential ($\\mathrm{D}$) is equal to ordinary ($\\mathrm{d}$).\nThis is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li>\n</ol>\n\n<p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\\Gamma$ through metric tensor $g$. In ES we introduce \n$\\Gamma_{ij}^{k}$ from the following expression:\n$$\\frac{\\partial^2 \\vec{x}}{\\partial x^i \\partial x^j} = \\Gamma_{ij}^{k}\\frac{\\partial \\vec{x}}{\\partial x^k},$$\nwhere $\\frac{\\partial \\vec{x}}{\\partial x^k} = \\vec{x}_k$ - local basis in $\\mathbb{R}^n$. Then having a scalar product we have\n$$\\langle x_l, x_{ij}\\rangle = \\Gamma_{ij}^k g_{lk},$$\nwhere we used $\\langle x_l, x_k\\rangle = g_{lk}$. Next it is convenient to denote:\n$$\\Gamma _{l, ij} = g_{lk}\\Gamma_{ij}^k.$$\nNext, we obtain\n$$\\frac{\\partial}{\\partial x^m}\\langle x_l, x_k \\rangle = \\frac{\\partial g_{lk}}{\\partial x^m},$$\nor by the chain rule\n$$\\langle x_{lm}, x_k \\rangle + \\langle x_{l}, x_{km} \\rangle = \\Gamma_{k,lm} + \\Gamma_{l, km} = \\frac{\\partial g_{kl}}{\\partial x^m}.$$\nHere we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have\n$$\\Gamma_{l, km} = \\frac12 \\Big( \\frac{\\partial g_{lk}}{x^m} + \\frac{\\partial g_{lm}}{x^k} - \\frac{\\partial g_{km}}{x^l}\\Big).$$ Raising index $l$ we get desirable result. 
</p>\n\n<hr>\n\n<p>P.S.you can find this proof in \nRashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p>\n" }, { "answer_id": 2100012, "score": 4, "text": "<p>By definition, the parallel transport of $e \\in T{p}M$ along a path $\\gamma(t), \\gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \\in T_{\\gamma(t)}M$ such that $\\nabla_{\\overset{\\cdot}{\\gamma}}X = 0 $ and $X_0 = e$.</p>\n\n<p>Now, by definiton your connection is compatible with the metric, i.e $Z \\langle X,Y \\rangle = \\langle \\nabla_Z X, Y\\rangle + \\langle X, \\nabla_Z Y\\rangle$ for any vector field $Z$.</p>\n\n<p>Thus taking $Z = d\\gamma/dt$, we obtain that $\\frac{d}{dt}\\langle X,Y \\rangle = \\langle \\nabla_{\\overset{\\cdot}{\\gamma}} X, Y\\rangle + \\langle X, \\nabla_{\\overset{\\cdot}{\\gamma}} Y\\rangle = 0$ since $X,Y$ are parallel vector. Thus $\\langle X, Y \\rangle = \\langle X_0, Y_0 \\rangle$ as wished.</p>\n" }, { "answer_id": 2112171, "score": 5, "text": "<p>I can prove the following statement:</p>\n\n<blockquote>\n <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\\Gamma_{ij}^k(x)$ such that\n 1. $\\Gamma_{ij}^k = \\Gamma_{ji}^k$<br>\n 2. Scalar product of two parallel transported vectors is constant along path.</p>\n</blockquote>\n\n<h2>Preliminaries</h2>\n\n<p>Let vectors $\\xi$ and $\\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$.\nIn Euclidean Space (where permanency of scalar product is obvious (why?) ) $\\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity):\n$$\\Gamma_{ij}^{k} = \\frac12g^{kl}\\Big(\\frac{\\partial g_{li}}{\\partial x^j} + \\frac{\\partial g_{lj}}{\\partial x^i} - \\frac{\\partial g_{ij}}{\\partial x^l}\\Big).$$</p>\n\n<h2>Proof of the statement</h2>\n\n<p>$\\rhd$ Starting with the definition of scalar product we can write:\n$$\\langle\\xi,\\eta\\rangle = g_{ij}\\xi^i\\eta^j.$$\nRequiring scalar product to be constant we write:\n$$\\mathrm{D}\\langle\\xi,\\eta\\rangle \\stackrel{why?}{=} \\mathrm{d}\\langle\\xi,\\eta\\rangle = \\mathrm{d}(g_{ij}\\xi^i\\eta^j) = 0.$$\nBy the chain rule we obtain:\n$$\\mathrm{d}g_{ij}\\xi^i\\eta^j + g_{ij}\\mathrm{d}\\xi^{i}\\eta^{j} + g_{ij}\\xi^{i}\\mathrm{d}\\eta^{j} = 0$$ \nBut vectors $\\xi,\\eta$ are transported parallel and then\n$$\\mathrm{d}\\xi^i = -\\Gamma_{pi}^{k}\\xi^i\\mathrm{d}x^{p},$$\nwhere $\\mathrm{d}x^{p}$ - differential along path $P$, $\\eta$ is transported similarly. 
Question: conservation of dot product with parallel transport
Answer 1: <p>I can prove the following statement:</p> <blockquote> <p>In a Riemannian space (a Riemannian manifold with metric) there exists a unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. the scalar product of two parallel-transported vectors is constant along any path.</p> </blockquote>
<h2>Preliminaries</h2>
<p>Let vectors $\xi$ and $\eta$ be transported along a path $P$. Also, let us denote the metric tensor by $g_{ij}(x)$. In Euclidean space $\mathbb{R}^n$ (where the constancy of the scalar product is obvious (why?)) it is possible to introduce a curvilinear coordinate system, which produces an affine connection $\Gamma_{ij}^k$ as well as a metric tensor (Gram matrix) $g_{ij}$ (the inverse metric we denote by $g^{kl}$). These two objects are related as follows: $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p>
<h2>Proof of the statement</h2>
<p>$\rhd$ Starting from the definition of the scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring the scalar product to be constant we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the product rule we obtain: $$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0.$$ But the vectors $\xi,\eta$ are parallel transported, so $$\mathrm{d}\xi^k = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ is the differential along the path $P$; $\eta$ is transported similarly. Moreover, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ So we have (renaming summation indices) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$ On the other hand, the vectors $\xi, \eta$ and the path $P$ can be arbitrary, so we have to require $$\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k = 0.$$ This equation has the same form as in the Euclidean case, so we solve it in the same way. From this and condition 1 of the statement we find the desired $\Gamma_{ij}^{k}$. $\lhd$</p>
<hr>
<h2>About the ''why's''</h2>
<ol> <li>Why, in Euclidean space, is it obvious that the scalar product of two vectors remains constant under parallel transport along a path $P$? Because in Euclidean space you can choose a Cartesian coordinate system in which the <em>coordinates</em> of a vector remain constant during parallel transport.</li> <li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0,$$ i.e. why is the absolute differential ($\mathrm{D}$) equal to the ordinary one ($\mathrm{d}$) here? Because the scalar product is a scalar, i.e. it has no indices (though not every quantity without indices is invariant).</li> </ol>
<p>Now about ''the same form as in the Euclidean case.'' Our purpose is to express the connection $\Gamma$ through the metric tensor $g$. In Euclidean space we introduce $\Gamma_{ij}^{k}$ via $$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$ where $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ is the local basis in $\mathbb{R}^n$. Taking scalar products we get $$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$ where we used $\langle x_l, x_k\rangle = g_{lk}$. It is convenient to denote $$\Gamma_{l, ij} = g_{lk}\Gamma_{ij}^k.$$ Next, we obtain $$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$ or, by the product rule, $$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$ Here we have one equation with two unknown quantities, but if we permute the indices cyclically we obtain two more equations with only one <em>new</em> unknown. Solving, we get $$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{\partial x^m} + \frac{\partial g_{lm}}{\partial x^k} - \frac{\partial g_{km}}{\partial x^l}\Big).$$ Raising the index $l$ we get the desired result.</p>
<hr>
<p>P.S. You can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis,'' though I do not know whether an English version of this book exists.</p>
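<p>A quick way to sanity-check the identity at the heart of this proof, $\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k = 0$, is symbolic computation. The sketch below is not part of the original answer: it is a minimal sympy script that assumes the round 2-sphere metric $g = \mathrm{diag}(1, \sin^2\theta)$ as a concrete example, builds $\Gamma_{ij}^k$ from the formula above, and checks that every component of the identity vanishes.</p>
<pre><code>import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
n = 2

# Round 2-sphere metric in (theta, phi) coordinates and its inverse.
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

# Gamma^k_{ij} = (1/2) g^{kl} (d_j g_{li} + d_i g_{lj} - d_l g_{ij}).
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[k, l]
              * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                 - sp.diff(g[i, j], x[l]))
              for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

# The compatibility identity from the proof must vanish for every p, i, j.
for p in range(n):
    for i in range(n):
        for j in range(n):
            expr = (sp.diff(g[i, j], x[p])
                    - sum(g[k, j] * Gamma[k][p][i] for k in range(n))
                    - sum(g[i, k] * Gamma[k][p][j] for k in range(n)))
            assert sp.simplify(expr) == 0

print("metric compatibility verified componentwise for the 2-sphere")
</code></pre>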
Answer 2: <p>By definition, the parallel transport of $e \in T_{p}M$ along a path $\gamma(t)$, $\gamma(0) = p$, is the unique vector field $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0$ and $X_0 = e$.</p> <p>Now, by definition your connection is compatible with the metric, i.e. $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p> <p>Thus, taking $Z = d\gamma/dt$, we obtain $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector fields. Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$, as desired.</p>
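<p>The same conclusion can be watched numerically (again, my own illustration rather than part of the answer): integrate the transport equation $\mathrm{d}X^k = -\Gamma_{pi}^k X^i \mathrm{d}x^p$ for two vectors along a curve and monitor $\langle X, Y\rangle$. The sketch below picks, as arbitrary assumptions, the latitude circle $\theta = 1$ on the unit sphere, a naive Euler step, and two initial vectors; the dot product then stays constant up to the $O(\mathrm{d}\varphi)$ error of the scheme.</p>
<pre><code>import math

THETA = 1.0  # fixed latitude; metric is g = diag(1, sin(theta)^2) in (theta, phi)

def transport_step(X, dphi):
    # Along a latitude only phi varies, so only Gamma^k_{phi i} terms enter:
    # Gamma^theta_{phi phi} = -sin(theta)cos(theta),
    # Gamma^phi_{phi theta} = cos(theta)/sin(theta).
    s, c = math.sin(THETA), math.cos(THETA)
    dX_theta = s * c * X[1] * dphi   # = -Gamma^theta_{phi phi} X^phi dphi
    dX_phi = -(c / s) * X[0] * dphi  # = -Gamma^phi_{phi theta} X^theta dphi
    return [X[0] + dX_theta, X[1] + dX_phi]

def dot(X, Y):
    return X[0] * Y[0] + math.sin(THETA) ** 2 * X[1] * Y[1]

X, Y = [1.0, 0.5], [0.3, -0.2]
steps = 100_000
dphi = 2 * math.pi / steps  # one full trip around the latitude
print("start:", dot(X, Y))
for _ in range(steps):
    X, Y = transport_step(X, dphi), transport_step(Y, dphi)
print("end:  ", dot(X, Y))  # matches the start up to the Euler integration error
</code></pre>
<p>A higher-order integrator (e.g. RK4) would shrink the residual drift; explicit Euler is used here only to keep the sketch short.</p>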
More over, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ So we have (renaming summation indexes) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$ On the other hand, vectors $\xi, \eta$ and path $P$ can be arbitrary, so we have to require: $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big) = 0.$$ This equation has the same form as in case of Euclidean Space so we solve this in the same way. From this and 1st condition of the statement we are able to find desired $\Gamma_{ij}^{k} \lhd.$ </p> <hr> <h2>About ''why's''</h2> <ol> <li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$. This is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li> <li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0.$$ I.e. why in this case absolute diferential ($\mathrm{D}$) is equal to ordinary ($\mathrm{d}$). This is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li> </ol> <p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\Gamma$ through metric tensor $g$. In ES we introduce $\Gamma_{ij}^{k}$ from the following expression: $$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$ where $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ - local basis in $\mathbb{R}^n$. Then having a scalar product we have $$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$ where we used $\langle x_l, x_k\rangle = g_{lk}$. Next it is convenient to denote: $$\Gamma _{l, ij} = g_{lk}\Gamma_{ij}^k.$$ Next, we obtain $$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$ or by the chain rule $$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$ Here we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have $$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{x^m} + \frac{\partial g_{lm}}{x^k} - \frac{\partial g_{km}}{x^l}\Big).$$ Raising index $l$ we get desirable result. </p> <hr> <p>P.S.you can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p> Answer 4: <p>By definition, the parallel transport of $e \in T{p}M$ along a path $\gamma(t), \gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0 $ and $X_0 = e$.</p> <p>Now, by definiton your connection is compatible with the metric, i.e $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p> <p>Thus taking $Z = d\gamma/dt$, we obtain that $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector. 
Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$ as wished.</p> Answer 5: <p>I can prove the following statement:</p> <blockquote> <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. Scalar product of two parallel transported vectors is constant along path.</p> </blockquote> <h2>Preliminaries</h2> <p>Let vectors $\xi$ and $\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$. In Euclidean Space (where permanency of scalar product is obvious (why?) ) $\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity): $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p> <h2>Proof of the statement</h2> <p>$\rhd$ Starting with the definition of scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring scalar product to be constant we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the chain rule we obtain: $$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0$$ But vectors $\xi,\eta$ are transported parallel and then $$\mathrm{d}\xi^i = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ - differential along path $P$, $\eta$ is transported similarly. More over, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ So we have (renaming summation indexes) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$ On the other hand, vectors $\xi, \eta$ and path $P$ can be arbitrary, so we have to require: $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big) = 0.$$ This equation has the same form as in case of Euclidean Space so we solve this in the same way. From this and 1st condition of the statement we are able to find desired $\Gamma_{ij}^{k} \lhd.$ </p> <hr> <h2>About ''why's''</h2> <ol> <li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$. This is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li> <li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0.$$ I.e. why in this case absolute diferential ($\mathrm{D}$) is equal to ordinary ($\mathrm{d}$). This is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li> </ol> <p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\Gamma$ through metric tensor $g$. In ES we introduce $\Gamma_{ij}^{k}$ from the following expression: $$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$ where $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ - local basis in $\mathbb{R}^n$. Then having a scalar product we have $$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$ where we used $\langle x_l, x_k\rangle = g_{lk}$. 
Next it is convenient to denote: $$\Gamma _{l, ij} = g_{lk}\Gamma_{ij}^k.$$ Next, we obtain $$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$ or by the chain rule $$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$ Here we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have $$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{x^m} + \frac{\partial g_{lm}}{x^k} - \frac{\partial g_{km}}{x^l}\Big).$$ Raising index $l$ we get desirable result. </p> <hr> <p>P.S.you can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p> Answer 6: <p>By definition, the parallel transport of $e \in T{p}M$ along a path $\gamma(t), \gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0 $ and $X_0 = e$.</p> <p>Now, by definiton your connection is compatible with the metric, i.e $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p> <p>Thus taking $Z = d\gamma/dt$, we obtain that $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector. Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$ as wished.</p> Answer 7: <p>I can prove the following statement:</p> <blockquote> <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. Scalar product of two parallel transported vectors is constant along path.</p> </blockquote> <h2>Preliminaries</h2> <p>Let vectors $\xi$ and $\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$. In Euclidean Space (where permanency of scalar product is obvious (why?) ) $\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity): $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p> <h2>Proof of the statement</h2> <p>$\rhd$ Starting with the definition of scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring scalar product to be constant we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the chain rule we obtain: $$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0$$ But vectors $\xi,\eta$ are transported parallel and then $$\mathrm{d}\xi^i = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ - differential along path $P$, $\eta$ is transported similarly. 
Answer 7: <p>I can prove the following statement:</p> <blockquote> <p>In a Riemannian space (a Riemannian manifold with a metric) there exists a unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. The scalar product of two parallel-transported vectors is constant along a path.</p> </blockquote> <h2>Preliminaries</h2> <p>Let vectors $\xi$ and $\eta$ be transported along a path $P$, and denote the metric tensor by $g_{ij}(x)$. In Euclidean space $\mathbb{R}^n$ (where the constancy of the scalar product is obvious (why?)) one can introduce a curvilinear coordinate system, which produces both an affine connection $\Gamma_{ij}^k$ and a metric tensor (Gram matrix) $g_{ij}$; the inverse metric will be denoted $g^{kl}$. These two objects are related as follows (sorry for the ambiguous use of the word ''connection''): $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p>
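<p>To make this formula concrete (a sketch of my own, not from the answer; the 2-sphere metric below is just an assumed example), one can evaluate it symbolically:</p>

<pre><code># Christoffel symbols from the metric via the formula above (sympy).
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round 2-sphere
ginv = g.inv()

def Gamma(k, i, j):
    # (1/2) g^{kl} ( d_j g_{li} + d_i g_{lj} - d_l g_{ij} ), summed over l
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l] *
        (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i]) - sp.diff(g[i, j], x[l]))
        for l in range(2)))

print(Gamma(0, 1, 1))   # -sin(theta)*cos(theta)  (= Gamma^theta_{phi phi})
print(Gamma(1, 0, 1))   # cos(theta)/sin(theta)   (= Gamma^phi_{theta phi})
</code></pre>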
<h2>Proof of the statement</h2> <p>$\rhd$ Starting from the definition of the scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring the scalar product to be constant, we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the product rule we obtain: $$\mathrm{d}g_{ij}\,\xi^i\eta^j + g_{ij}\,\mathrm{d}\xi^{i}\,\eta^{j} + g_{ij}\,\xi^{i}\,\mathrm{d}\eta^{j} = 0.$$ But the vectors $\xi,\eta$ are parallel-transported, so $$\mathrm{d}\xi^k = -\Gamma_{pi}^{k}\xi^i\,\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ is the differential along the path $P$; $\eta$ is transported similarly. Moreover, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\,\mathrm{d}x^p.$$ So we have (after renaming summation indices) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\,\mathrm{d}x^p = 0.$$ On the other hand, the vectors $\xi, \eta$ and the path $P$ are arbitrary, so we must require $$\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k = 0.$$ This equation has the same form as in the Euclidean case, so we solve it in the same way. From this and condition 1 of the statement we can find the desired $\Gamma_{ij}^{k}$. $\lhd$</p>
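<p>The vanishing of this bracket can also be checked directly (again a sketch of my own, with the sphere's standard Christoffel symbols hard-coded as the assumed example):</p>

<pre><code># Check d_p g_ij - g_kj Gamma^k_{pi} - g_ik Gamma^k_{pj} = 0 on the 2-sphere.
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])

# Gam[k][i][j] = Gamma^k_{ij} for the round sphere
cot = sp.cos(theta) / sp.sin(theta)
Gam = [[[0, 0], [0, -sp.sin(theta) * sp.cos(theta)]],   # Gamma^theta_{..}
       [[0, cot], [cot, 0]]]                            # Gamma^phi_{..}

for p in range(2):
    for i in range(2):
        for j in range(2):
            expr = (sp.diff(g[i, j], x[p])
                    - sum(g[k, j] * Gam[k][p][i] for k in range(2))
                    - sum(g[i, k] * Gam[k][p][j] for k in range(2)))
            assert sp.simplify(expr) == 0
print("compatibility equation holds for all p, i, j")
</code></pre>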
More over, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ So we have (renaming summation indexes) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$ On the other hand, vectors $\xi, \eta$ and path $P$ can be arbitrary, so we have to require: $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big) = 0.$$ This equation has the same form as in case of Euclidean Space so we solve this in the same way. From this and 1st condition of the statement we are able to find desired $\Gamma_{ij}^{k} \lhd.$ </p> <hr> <h2>About ''why's''</h2> <ol> <li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$. This is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li> <li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0.$$ I.e. why in this case absolute diferential ($\mathrm{D}$) is equal to ordinary ($\mathrm{d}$). This is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li> </ol> <p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\Gamma$ through metric tensor $g$. In ES we introduce $\Gamma_{ij}^{k}$ from the following expression: $$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$ where $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ - local basis in $\mathbb{R}^n$. Then having a scalar product we have $$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$ where we used $\langle x_l, x_k\rangle = g_{lk}$. Next it is convenient to denote: $$\Gamma _{l, ij} = g_{lk}\Gamma_{ij}^k.$$ Next, we obtain $$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$ or by the chain rule $$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$ Here we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have $$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{x^m} + \frac{\partial g_{lm}}{x^k} - \frac{\partial g_{km}}{x^l}\Big).$$ Raising index $l$ we get desirable result. </p> <hr> <p>P.S.you can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p> Answer 28: <p>By definition, the parallel transport of $e \in T{p}M$ along a path $\gamma(t), \gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0 $ and $X_0 = e$.</p> <p>Now, by definiton your connection is compatible with the metric, i.e $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p> <p>Thus taking $Z = d\gamma/dt$, we obtain that $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector. 
Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$ as wished.</p> Answer 29: <p>I can prove the following statement:</p> <blockquote> <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. Scalar product of two parallel transported vectors is constant along path.</p> </blockquote> <h2>Preliminaries</h2> <p>Let vectors $\xi$ and $\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$. In Euclidean Space (where permanency of scalar product is obvious (why?) ) $\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity): $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p> <h2>Proof of the statement</h2> <p>$\rhd$ Starting with the definition of scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring scalar product to be constant we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the chain rule we obtain: $$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0$$ But vectors $\xi,\eta$ are transported parallel and then $$\mathrm{d}\xi^i = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ - differential along path $P$, $\eta$ is transported similarly. More over, $$\mathrm{d}g_{ij} = \frac{\partial g_{ij}}{\partial x^p}\mathrm{d}x^p.$$ So we have (renaming summation indexes) $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big)\xi^i\eta^j\mathrm{d}x^p = 0.$$ On the other hand, vectors $\xi, \eta$ and path $P$ can be arbitrary, so we have to require: $$\Big(\frac{\partial g_{ij}}{\partial x^p} - g_{kj}\Gamma_{pi}^k - g_{ik}\Gamma_{pj}^k\Big) = 0.$$ This equation has the same form as in case of Euclidean Space so we solve this in the same way. From this and 1st condition of the statement we are able to find desired $\Gamma_{ij}^{k} \lhd.$ </p> <hr> <h2>About ''why's''</h2> <ol> <li>Why in Euclidean Space it is obvious that scalar product of two vectors remains constant while parallel transported along path $P$. This is because in Euclidean Space you can choose Cartesian coordinate system where <em>coordinates</em> of a vector remain constant during parallel transport.</li> <li>Why $$\mathrm{D}\langle\xi,\eta\rangle = \mathrm{d}\langle\xi,\eta\rangle = 0.$$ I.e. why in this case absolute diferential ($\mathrm{D}$) is equal to ordinary ($\mathrm{d}$). This is because scalar product is a scalar i.e. has no indexes (but no every quantity that has no indexes is invariant).</li> </ol> <p>Now about the same form as in Euclidean Space. Our purpose here is to express connection $\Gamma$ through metric tensor $g$. In ES we introduce $\Gamma_{ij}^{k}$ from the following expression: $$\frac{\partial^2 \vec{x}}{\partial x^i \partial x^j} = \Gamma_{ij}^{k}\frac{\partial \vec{x}}{\partial x^k},$$ where $\frac{\partial \vec{x}}{\partial x^k} = \vec{x}_k$ - local basis in $\mathbb{R}^n$. Then having a scalar product we have $$\langle x_l, x_{ij}\rangle = \Gamma_{ij}^k g_{lk},$$ where we used $\langle x_l, x_k\rangle = g_{lk}$. 
Next it is convenient to denote: $$\Gamma _{l, ij} = g_{lk}\Gamma_{ij}^k.$$ Next, we obtain $$\frac{\partial}{\partial x^m}\langle x_l, x_k \rangle = \frac{\partial g_{lk}}{\partial x^m},$$ or by the chain rule $$\langle x_{lm}, x_k \rangle + \langle x_{l}, x_{km} \rangle = \Gamma_{k,lm} + \Gamma_{l, km} = \frac{\partial g_{kl}}{\partial x^m}.$$ Here we have one equation with two unknown quantities, but if we permute indices cyclically, we obtain two more equations with only one <em>new</em> unknown. Solving we have $$\Gamma_{l, km} = \frac12 \Big( \frac{\partial g_{lk}}{x^m} + \frac{\partial g_{lm}}{x^k} - \frac{\partial g_{km}}{x^l}\Big).$$ Raising index $l$ we get desirable result. </p> <hr> <p>P.S.you can find this proof in Rashevsky's ''Riemannian geometry and tensor analysis.'' Though I do not know if there exists English version of this book.</p> Answer 30: <p>By definition, the parallel transport of $e \in T{p}M$ along a path $\gamma(t), \gamma(0) = p$ is the unique vector fields $X_t$ with $X_t \in T_{\gamma(t)}M$ such that $\nabla_{\overset{\cdot}{\gamma}}X = 0 $ and $X_0 = e$.</p> <p>Now, by definiton your connection is compatible with the metric, i.e $Z \langle X,Y \rangle = \langle \nabla_Z X, Y\rangle + \langle X, \nabla_Z Y\rangle$ for any vector field $Z$.</p> <p>Thus taking $Z = d\gamma/dt$, we obtain that $\frac{d}{dt}\langle X,Y \rangle = \langle \nabla_{\overset{\cdot}{\gamma}} X, Y\rangle + \langle X, \nabla_{\overset{\cdot}{\gamma}} Y\rangle = 0$ since $X,Y$ are parallel vector. Thus $\langle X, Y \rangle = \langle X_0, Y_0 \rangle$ as wished.</p> Answer 31: <p>I can prove the following statement:</p> <blockquote> <p>In Riemannian Space (Riemannian manifold with metric) there exist unique connection $\Gamma_{ij}^k(x)$ such that 1. $\Gamma_{ij}^k = \Gamma_{ji}^k$<br> 2. Scalar product of two parallel transported vectors is constant along path.</p> </blockquote> <h2>Preliminaries</h2> <p>Let vectors $\xi$ and $\eta$ be transported along path $P$. Also, let us denote metric tensor as $g_{ij}(x)$. In Euclidean Space (where permanency of scalar product is obvious (why?) ) $\mathbb{R}^n$ it is possible to introduce curvilinear coordinate system which produces affine connection $\Gamma_{ij}^k$ so as metric tensor (or Gram matrix) $g_{ij}$ (inverse metric we will denote as $g^{kl}$). These two object have the following connection (sorry for ambiguity): $$\Gamma_{ij}^{k} = \frac12g^{kl}\Big(\frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l}\Big).$$</p> <h2>Proof of the statement</h2> <p>$\rhd$ Starting with the definition of scalar product we can write: $$\langle\xi,\eta\rangle = g_{ij}\xi^i\eta^j.$$ Requiring scalar product to be constant we write: $$\mathrm{D}\langle\xi,\eta\rangle \stackrel{why?}{=} \mathrm{d}\langle\xi,\eta\rangle = \mathrm{d}(g_{ij}\xi^i\eta^j) = 0.$$ By the chain rule we obtain: $$\mathrm{d}g_{ij}\xi^i\eta^j + g_{ij}\mathrm{d}\xi^{i}\eta^{j} + g_{ij}\xi^{i}\mathrm{d}\eta^{j} = 0$$ But vectors $\xi,\eta$ are transported parallel and then $$\mathrm{d}\xi^i = -\Gamma_{pi}^{k}\xi^i\mathrm{d}x^{p},$$ where $\mathrm{d}x^{p}$ - differential along path $P$, $\eta$ is transported similarly. 
Arrangements of balls from a bag
2,100,546
9
https://math.stackexchange.com/questions/2100546/arrangements-of-balls-from-a-bag
<p>A bag contains $2$ red, $2$ green, $2$ black, $3$ white, $1$ yellow, $1$ brown and $1$ orange ball.</p> <p>In how many ways can all the balls be arranged in a row if balls of the same color must not be adjacent?</p> <p>I am aware that $\dfrac{12!}{2!\times 2! \times 2! \times 3!}$ arrangements are possible in total, but these include arrangements in which balls of the same color are adjacent. How can this problem be solved?</p>
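
For reference, the unrestricted total quoted in the question is quick to verify with a one-off sketch (my own, not part of the original post):

```python
from math import factorial

# 12 balls with color multiplicities 2, 2, 2, 3, 1, 1, 1
total = factorial(12) // (factorial(2) ** 3 * factorial(3))
print(total)  # 9979200 arrangements before excluding adjacent same-color balls
```
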
[ { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. 
but as I said in my comment, it is very long, so I am pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" } ]
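
The closing integral in the accepted approach reduces, term by term, to factorials via $\int_0^\infty t^n e^{-t}\,dt = n!$. The following short pure-Python sketch (my own construction, not taken from either answer) multiplies out the polynomials from (2) exactly and reproduces the count:

```python
from fractions import Fraction
from math import factorial

# The polynomials l_1, l_2, l_3 from (2), stored as {power: coefficient} maps.
l1 = {1: Fraction(1)}
l2 = {2: Fraction(1, 2), 1: Fraction(-1)}
l3 = {3: Fraction(1, 6), 2: Fraction(-1), 1: Fraction(1)}

def mul(p, q):
    """Multiply two polynomials given as {power: coefficient} dicts."""
    r = {}
    for a, ca in p.items():
        for b, cb in q.items():
            r[a + b] = r.get(a + b, Fraction(0)) + ca * cb
    return r

# Product l_2(t)^3 * l_3(t) * l_1(t)^3 for the multiplicities (2,2,2,3,1,1,1).
prod = {0: Fraction(1)}
for poly in (l2, l2, l2, l3, l1, l1, l1):
    prod = mul(prod, poly)

# Integrate against e^{-t} term by term: int_0^oo t^n e^{-t} dt = n!
count = sum(c * factorial(n) for n, c in prod.items())
print(count)  # expected: 3301200
```

For extra confidence, a brute-force pass over all 9,979,200 multiset permutations with an adjacency check should reproduce the same count, though it takes considerably longer to run.
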
This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. 
and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. 
These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. 
</p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. 
This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. 
and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. 
These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. 
</p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. 
This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p>\n</blockquote>\n\n<p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function\n\\begin{align*}\n\\sum_{k=0}^\\infty l_k(t)x^k=e^{\\frac{tx}{1+x}}\n\\end{align*}\nThe first few such polynomials are\n\\begin{align*}\nl_0(t)&amp;=1\\\\\nl_1(t)&amp;=t\\\\\nl_2(t)&amp;=\\frac{1}{2}t^2-t\\\\\nl_3(t)&amp;=\\frac{1}{6}t^3-t^2+t\\tag{2}\n\\end{align*}\nThese polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p>\n\n<blockquote>\n <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is\n \\begin{align*}\n\\int_{0}^\\infty e^{-t}\\left(\\prod_{i=1}^kl_{n_i}(t)\\right)\\,dt\\tag{3}\n\\end{align*}</p>\n \n <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set\n \\begin{align*}\n&amp;n_1=n_2=n_3=2,\\\\\n&amp;n_4=3,\\\\\n&amp;n_5=n_6=n_7=1\n\\end{align*}</p>\n \n <p>We apply theorem 2.1. 
and obtain using (2) and (3) and with some help of Wolfram Alpha\n \\begin{align*}\n\\int_{0}^\\infty&amp;e^{-t}\\left(\\prod_{i=1}^7l_{n_i}(t)\\right)\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(l_2(t)\\right)^3l_3(t)\\left(l_1(t)\\right)^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{2}t^2-t\\right)^3\\left(\\frac{1}{6}t^3-t^2+t\\right)t^3\\,dt\\\\\n&amp;=\\int_{0}^\\infty e^{-t}\\left(\\frac{1}{48}t^{12}-\\frac{1}{4}t^{11}+\\frac{9}{8}t^{10}\n-\\frac{29}{12}t^9+\\frac{5}{2}t^8-t^7\\right)\\,dt\\\\\n&amp;=3301200\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2100600, "score": 1, "text": "<p>This question is equivalent to </p>\n\n<blockquote>\n <p>\"In how many different ways can we arrange letters of the word\n AABBCCDDDEFG such that no two adjacent letters are alike?\"</p>\n</blockquote>\n\n<p>I have been doing many such questions in past. This <a href=\"http://www.careerbless.com/calculators/word/index.php\" rel=\"nofollow noreferrer\">tool</a> was very useful. Following is the result obtained from this tool.<br>\n[see generated question $24$. Thought of retying it here. but as i said in my comment, it is very long and pasting the screenshot here]</p>\n\n<blockquote>\n <p><a href=\"https://i.stack.imgur.com/DGheA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/DGheA.png\" alt=\"enter image description here\"></a></p>\n</blockquote>\n" }, { "answer_id": 2101434, "score": 5, "text": "<p>This answer is based upon a generating function of <em><a href=\"https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials\" rel=\"noreferrer\">generalized Laguerre polynomials</a></em>\n\\begin{align*}\n L_k^{(\\alpha)}(t)=\\sum_{i=0}^k(-1)^k\\binom{k+\\alpha}{k-i}\\frac{t^i}{i!}\\tag{1}\n \\end{align*}</p>\n\n<p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href=\"https://arxiv.org/pdf/1306.6232v1.pdf\" rel=\"noreferrer\">Counting words with Laguerre series</a></em> by Jair Taylor. </p>\n\n<blockquote>\n <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters\n \\begin{align*}\n\\{r,g,b,w,y,B,o\\}\n\\end{align*}\n and are looking for words of length $12$ built from\n \\begin{align*}\nr,r,g,g,b,b,w,w,w,y,B,o\n\\end{align*}\n which have the property that they contain no consecutive equal letters. 
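Editor's note: the second answer's reformulation (arrangements of AABBCCDDDEFG with no two adjacent letters alike) also allows a brute-force cross-check. A slow but direct sketch, again assuming SymPy (for its `multiset_permutations` iterator); it walks all $12!/(2!\,2!\,2!\,3!)=9{,}979{,}200$ distinct arrangements, so expect it to run for a minute or two.

```python
# Brute-force cross-check: count arrangements of the multiset
# {r,r,g,g,b,b,w,w,w,y,B,o} with no two equal adjacent letters.
from sympy.utilities.iterables import multiset_permutations

balls = list("rrggbbwwwyBo")
count = sum(1 for p in multiset_permutations(balls)
            if all(a != b for a, b in zip(p, p[1:])))
print(count)  # 3301200, agreeing with the Laguerre-series computation
```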
Question: Arrangements of balls from a bag Answer 1: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^i\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply Theorem 2.1 and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 2: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in the past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 3: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 4: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 5: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 6: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 7: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 8: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 9: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 10: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 11: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 12: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 13: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 14: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 15: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 16: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 17: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 18: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as i said in my comment, it is very long and pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote> Answer 19: <p>This answer is based upon a generating function of <em><a href="https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials" rel="noreferrer">generalized Laguerre polynomials</a></em> \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^k\binom{k+\alpha}{k-i}\frac{t^i}{i!}\tag{1} \end{align*}</p> <p>The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in <em><a href="https://arxiv.org/pdf/1306.6232v1.pdf" rel="noreferrer">Counting words with Laguerre series</a></em> by Jair Taylor. </p> <blockquote> <p>We encode the colors (r)ed, (g)reen, (b)lack, (w)hite, (y)ellow, (B)rown, (o)range of the balls with the letters \begin{align*} \{r,g,b,w,y,B,o\} \end{align*} and are looking for words of length $12$ built from \begin{align*} r,r,g,g,b,b,w,w,w,y,B,o \end{align*} which have the property that they contain no consecutive equal letters. These words are called <strong>Carlitz words</strong> or <strong>Smirnov words</strong>.</p> </blockquote> <p>We find in section 2 of the referred paper Laguerre polynomials $l_k(t)$ defined by their generating function \begin{align*} \sum_{k=0}^\infty l_k(t)x^k=e^{\frac{tx}{1+x}} \end{align*} The first few such polynomials are \begin{align*} l_0(t)&amp;=1\\ l_1(t)&amp;=t\\ l_2(t)&amp;=\frac{1}{2}t^2-t\\ l_3(t)&amp;=\frac{1}{6}t^3-t^2+t\tag{2} \end{align*} These polynomials are a specific form of Laguerre polynomials (1), namely $$l_k(t)=(-1)^kL_k^{(-1)}(t)$$</p> <blockquote> <p><strong>Theorem 2.1</strong> in the referred paper states: Given nonnegative integers $n_1,\ldots,n_k$, the number of $k$-ary Carlitz words with the letter $i$ used exactly $n_i$ times is \begin{align*} \int_{0}^\infty e^{-t}\left(\prod_{i=1}^kl_{n_i}(t)\right)\,dt\tag{3} \end{align*}</p> <p>Since we have three characters $r,g,b$ each twice, one character $w$ three times and three characters $y,B,o$ each occurring once, we set \begin{align*} &amp;n_1=n_2=n_3=2,\\ &amp;n_4=3,\\ &amp;n_5=n_6=n_7=1 \end{align*}</p> <p>We apply theorem 2.1. and obtain using (2) and (3) and with some help of Wolfram Alpha \begin{align*} \int_{0}^\infty&amp;e^{-t}\left(\prod_{i=1}^7l_{n_i}(t)\right)\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(l_2(t)\right)^3l_3(t)\left(l_1(t)\right)^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{2}t^2-t\right)^3\left(\frac{1}{6}t^3-t^2+t\right)t^3\,dt\\ &amp;=\int_{0}^\infty e^{-t}\left(\frac{1}{48}t^{12}-\frac{1}{4}t^{11}+\frac{9}{8}t^{10} -\frac{29}{12}t^9+\frac{5}{2}t^8-t^7\right)\,dt\\ &amp;=3301200 \end{align*}</p> </blockquote> Answer 20: <p>This question is equivalent to </p> <blockquote> <p>"In how many different ways can we arrange letters of the word AABBCCDDDEFG such that no two adjacent letters are alike?"</p> </blockquote> <p>I have been doing many such questions in past. This <a href="http://www.careerbless.com/calculators/word/index.php" rel="nofollow noreferrer">tool</a> was very useful. Following is the result obtained from this tool.<br> [see generated question $24$. Thought of retying it here. 
but as I said in my comment, it is very long, so I am pasting the screenshot here]</p> <blockquote> <p><a href="https://i.stack.imgur.com/DGheA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGheA.png" alt="enter image description here"></a></p> </blockquote>
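The integral in Answer 19 can also be checked symbolically. A minimal sketch, assuming Python with sympy is available (an assumption; the answer itself used Wolfram Alpha):

```python
# Sketch: verify the Carlitz-word count from Answer 19 (assumes sympy).
from sympy import Rational, exp, integrate, oo, symbols

t = symbols('t', positive=True)

# The polynomials l_1, l_2, l_3 from (2).
l1 = t
l2 = Rational(1, 2)*t**2 - t
l3 = Rational(1, 6)*t**3 - t**2 + t

# Theorem 2.1 with n_1 = n_2 = n_3 = 2, n_4 = 3, n_5 = n_6 = n_7 = 1.
count = integrate(exp(-t) * l2**3 * l3 * l1**3, (t, 0, oo))
print(count)  # expected: 3301200
```

This reproduces the value $3301200$ obtained above; equivalently, integrating the expanded polynomial term by term uses $\int_0^\infty e^{-t}t^k\,dt=k!$.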
Let $f: [0,1] \to \mathbb{R}$ be continuous, prove $\lim_{n\to \infty} \int_0^1 f(x^n)dx = f(0)$
2,100,975
9
https://math.stackexchange.com/questions/2100975/let-f-0-1-to-mathbbr-be-continuous-prove-lim-n-to-infty-int-0
<p>Let $f: [0,1] \to \mathbb{R}$ be continuous; prove $\lim_{n\to \infty} \int_0^1 f(x^n)dx = f(0)$.</p> <p>This makes some sense looking at it. I have only the Regulated Integral definition of integration to work with: <a href="https://en.wikipedia.org/wiki/Regulated_integral" rel="noreferrer">https://en.wikipedia.org/wiki/Regulated_integral</a></p> <p>It uses sequences of step functions with a partition over a closed interval.</p> <p>So I need to evaluate the integral prior to taking the limit, but I am not sure how to get what I need. Since $f$ is a continuous function, there exists a sequence $(\varphi_m)_{m \in \mathbb{N}}$ of step functions such that $\lim_{m \to \infty} \sup_{x \in [0,1]} \lvert f(x) - \varphi_m(x)\rvert = 0$. Moreover, $\int_0^1 f(x)dx := \lim_{m \to \infty} \int_0^1 \varphi_m(x)dx$, so it should be the case that $\int_0^1 f(x^n)dx := \lim_{m \to \infty} \int_0^1 \varphi_m(x^n)dx$, and I assume it would then follow that $\lim_{n \to \infty } \int_0^1 f(x^n)dx = \lim_{n \to\infty} \left(\lim_{m \to \infty} \int_0^1 \varphi_m(x^n)dx \right)$.</p> <p>Now, $\int_a^b \varphi(\eta)d\eta := \sum_{j=0}^{N}\varphi(\eta_j)(\sigma_{j+1}-\sigma_j)$ where $(\sigma_j)_{j=0}^{N+1}$ is a partition of $[a,b]$ and $\eta_{j} \in (\sigma_j,\sigma_{j+1})$, with $\varphi$ constant on each block of the partition, so the choice of $\eta_j$ is not particularly important.</p> <p>So I should have $\lim_{n \to \infty}\lim_{m \to \infty}\int_a^b \varphi_m(\eta^n)d\eta = \lim_{n \to \infty}\left(\lim_{m \to \infty}\sum_{j=0}^{N}\varphi_m(\eta_j^n)(\sigma_{j+1}-\sigma_j)\right)$.</p> <p>This is where I get stuck.</p>
[ { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. 
\n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. 
Accordingly the Arzelà-Osgood theorem completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" } ]
The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. 
But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. 
It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. 
Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. 
The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. 
(Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . 
Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. 
It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. 
Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. 
Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. 
This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. 
Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. The second part has absolute value at most $(1-a)\\cdot\\lVert f\\rVert_\\infty &lt; \\frac{\\varepsilon}{2}$ by definition.</p>\n\n<p>As for the first, we use continuity of $f$ at $0$: let $\\delta_\\varepsilon&gt;0$ be such that $\\lvert u\\rvert \\leq \\delta_\\varepsilon$ implies $\\lvert f(u)\\rvert \\leq \\frac{\\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\\geq N_{\\varepsilon}$, for some $N_{\\varepsilon}\\geq 0$, we have $\\lvert x^n\\rvert \\leq \\lvert a^n\\rvert \\leq \\delta_\\varepsilon$. 
Thus, the first term can be bounded as\n$$\n\\left\\lvert \\int_0^a f(x^n)dx\\right\\rvert \\leq \\int_0^a \\left\\lvert f(x^n)\\right\\rvert dx\n\\leq \\int_0^a \\frac{\\varepsilon}{2} dx \\leq \\frac{\\varepsilon}{2}\n$$\nfor any $n\\geq N_{\\varepsilon}$.</p>\n\n<p>Putting it together, for $n\\geq N_{\\varepsilon}$ one has \n$$\n\\left\\lvert \\int_0^1 f(x^n)dx\\right\\rvert \\leq \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon $$\nby the triangle inequality. This shows that\n$$\n\\int_0^1 f(x^n)dx\\xrightarrow[n\\to\\infty]{} 0\n$$\nand from there the general case follows.</p>\n" }, { "answer_id": 2101015, "score": 4, "text": "<p>Since $f $ is continuous, given $\\varepsilon&gt;0$ there exists $\\delta&gt;0$ such that $|f (x)-f (0)|&lt;\\varepsilon/2 $ whenever $|x|&lt;\\delta $. Let $M=\\max \\{|f (x)|:\\ x\\in [0,1]\\}$. Choose $x_0\\in (0,1) $ such that $1-x_0 &lt;\\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\\delta $ . Then, for any $n\\geq n_0$,</p>\n\n<p>\\begin{align}\n\\left|\\int_0^1f (x^n)\\,dx-f (0)\\right|\n&amp;=\\left|\\int_0^1(f (x^n)-f (0))\\,dx\\right| \n\\leq \\int_0^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;= \\int_0^{x_0}|f (x^n)-f (0)|\\,dx\n+ \\int_{x_0}^1|f (x^n)-f (0)|\\,dx \\\\ \\ \\\\\n&amp;\\leq \\frac\\varepsilon2 x_0+2M (1-x_0)\\\\ \\ \\\\\n&amp;&lt;\\frac\\varepsilon 2+\\frac\\varepsilon 2=\\varepsilon. \n\\end{align}</p>\n\n<hr>\n\n<p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p>\n" }, { "answer_id": 4383938, "score": 1, "text": "<p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p>\n<p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p>\n<blockquote>\n<p><em>Theorem</em> 1. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is Lebesgue integrable\nand that <span class=\"math-container\">$f$</span> is continuous at <span class=\"math-container\">$0$</span>. Then\n<span class=\"math-container\">$$ \\left| \\int_0^1 f(x^n)\\,dx - f(0)\\right| = \\left| \\int_0^1 [f(x^n)-f(0)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0)|\\,dx \\to 0 \\tag{1} $$</span>\nas <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class=\"math-container\">$\\{f(x^n)-f(0)\\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p>\n<p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p>\n<blockquote>\n<p><em>Theorem</em> 2. Suppose that <span class=\"math-container\">$f:[0,1]\\to\\mathbb R$</span> is regulated. 
Then <span class=\"math-container\">$$\n \\left| \\int_0^1 f(x^n)\\,dx - f(0+)\\right| = \\left| \\int_0^1\n [f(x^n)-f(0+)]\\,dx \\right| \\leq \\int_0^1|f(x^n)-f(0+)|\\,dx \\to 0\n \\tag{2} $$</span> as <span class=\"math-container\">$n\\to \\infty$</span>.</p>\n</blockquote>\n<p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p>\n<p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class=\"math-container\">$\\{f_n\\}$</span> is a sequence of Riemann integrable functions on an interval <span class=\"math-container\">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class=\"math-container\">$f$</span>. Then\n<span class=\"math-container\">$ \\lim_{n\\rightarrow \\infty}\\int_a^b f_n(x)\\,dx = \\int_a^b f(x)\\,dx.$</span></p>\n<p><em>Proof of Theorem 2.</em> The sequence <span class=\"math-container\">$\\{f(x^n)-f(0+)\\}$</span> converges to zero pointwise on <span class=\"math-container\">$(0,1)$</span> since <span class=\"math-container\">$f$</span> is regulated. It is constant at <span class=\"math-container\">$x=0$</span> and <span class=\"math-container\">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p>\n<hr />\n<p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>\n" }, { "answer_id": 2100992, "score": 9, "text": "<p>As I mentioned in a comment, if you allow the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"noreferrer\">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \\stackrel{\\rm def}{=} f(x^n)$ ($x\\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\\stackrel{\\rm def}{=} \\lVert f\\rVert_\\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get\n$$\n\\int_0^1 f_n \\xrightarrow[n\\to\\infty]{} \\int_0^1 \\lim_{n\\to\\infty}f_n = \\int_0^1 f(0)dx = f(0).\n$$</p>\n\n<p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p>\n\n<hr>\n\n<p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\\stackrel{\\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p>\n\n<p>Fix $\\varepsilon&gt;0$, break the interval in 2:\n$$\n\\int_0^1 f(x^n)dx = \\int_0^a f(x^n)dx +\\int_a^1 f(x^n)dx $$ for some convenient $a=a(\\varepsilon, f) \\stackrel{\\rm def}{=\n} 1-\\frac{\\varepsilon}{2\\lVert f\\rVert_\\infty}$. 
Question: Let $f: [0,1] \to \mathbb{R}$ be continuous, prove $\lim_{n\to \infty} \int_0^1 f(x^n)dx = f(0)$ Answer 1: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a>, then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1]$) converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$, being continuous on a closed interval, is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$ and break the interval in two: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$) we have $0 \leq x^n \leq a^n \leq \delta_\varepsilon$ for all $x\in[0,a]$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 2: <p>Since $f$ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f(x)-f(0)|&lt;\varepsilon/2$ whenever $|x|&lt;\delta$. Let $M=\max \{|f(x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1)$ such that $1-x_0 &lt;\varepsilon/(2M)$. Choose $n_0$ such that $x_0^{n_0}&lt;\delta$. Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f(x^n)\,dx-f(0)\right| &amp;=\left|\int_0^1(f(x^n)-f(0))\,dx\right| \leq \int_0^1|f(x^n)-f(0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f(x^n)-f(0)|\,dx + \int_{x_0}^1|f(x^n)-f(0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M(1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. \end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n$ does the job.</p> Answer 3: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. 
(Not worth more.)</p> <p>As evident here, it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral), it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well, yes, for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century), and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem, there is an earlier, weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly, the Arzelà-Osgood theorem completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>
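Editor's note (added, not part of the original thread): Answer 2's closing aside says the substitution $u=x^n$ does the job, and the short computation below is our own hedged sketch of that route, assuming only that $f$ is continuous on $[0,1]$. With $u=x^n$, i.e. $x=u^{1/n}$ and $dx=\frac{1}{n}u^{\frac{1}{n}-1}\,du$, one gets $$ \int_0^1 f(x^n)\,dx = \int_0^1 f(u)\,k_n(u)\,du, \qquad k_n(u) \stackrel{\rm def}{=} \frac{1}{n}\,u^{\frac{1}{n}-1}. $$ The kernel $k_n$ is nonnegative with $\int_0^1 k_n(u)\,du = \left[u^{1/n}\right]_0^1 = 1$, and for each fixed $\delta\in(0,1)$ its mass on $[\delta,1]$ equals $1-\delta^{1/n}\to 0$, so $k_n$ concentrates at $u=0$ and $\int_0^1 f(u)\,k_n(u)\,du \to f(0)$ by the standard approximate-identity argument (continuity of $f$ at $0$ plus boundedness of $f$ on $[0,1]$).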
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 6: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 7: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 8: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
Answer 8: <p>Since $f$ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f(x)-f(0)|&lt;\varepsilon/2$ whenever $|x|&lt;\delta$. Let $M=\max \{|f(x)|:\ x\in [0,1]\}$ and assume $M&gt;0$ (otherwise the claim is trivial). Choose $x_0\in (0,1)$ such that $1-x_0 &lt;\varepsilon/4M$. Choose $n_0$ such that $x_0^{n_0}&lt;\delta$. Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1 f(x^n)\,dx-f(0)\right| &amp;=\left|\int_0^1(f(x^n)-f(0))\,dx\right| \leq \int_0^1|f(x^n)-f(0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f(x^n)-f(0)|\,dx + \int_{x_0}^1|f(x^n)-f(0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M(1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. \end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n$ does the job.</p>
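<p>To flesh out that aside (a sketch of one way the substitution might be meant; the answer itself leaves the computation to the reader): the change of variables $u=x^n$, $dx=\frac1n u^{\frac1n-1}\,du$, turns the integral into an average against an explicit kernel, $$ \int_0^1 f(x^n)\,dx = \frac1n\int_0^1 u^{\frac1n-1} f(u)\,du . $$ The FTC enters through the antiderivative $u\mapsto u^{1/n}$ of the kernel: $\frac1n\int_0^1 u^{\frac1n-1}\,du = \big[u^{1/n}\big]_0^1 = 1$, while for each fixed $\delta\in(0,1)$, $\frac1n\int_\delta^1 u^{\frac1n-1}\,du = 1-\delta^{1/n}\to 0$. So the kernel is a probability density whose mass concentrates in $[0,\delta]$ as $n\to\infty$, and splitting at $\delta$ gives $$ \left|\frac1n\int_0^1 u^{\frac1n-1} f(u)\,du - f(0)\right| \leq \sup_{[0,\delta]}|f-f(0)|\cdot\delta^{1/n} + 2\lVert f\rVert_\infty\left(1-\delta^{1/n}\right), $$ which continuity of $f$ at $0$ makes small: first choose $\delta$, then let $n\to\infty$.</p>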
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 10: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 11: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 12: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 13: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 14: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 15: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 16: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 17: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 18: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 19: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 20: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 21: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 22: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 23: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 24: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 25: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 26: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 27: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 28: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 29: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 30: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p> Answer 31: <p>As I mentioned in a comment, if you allow the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="noreferrer">Dominated Convergence Theorem</a> then this is a matter of a few lines. The sequence of functions $(f_n)_n$ defined by $f_n(x) \stackrel{\rm def}{=} f(x^n)$ ($x\in[0,1])$ converges pointwise <em>(by continuity of $f$)</em> to the constant function $f(0)$ on $[0,1)$, and one can uniformly dominate all $f_n$'s by the integrable (because constant) function $g\stackrel{\rm def}{=} \lVert f\rVert_\infty$ <em>($f$ being bounded on $[0,1]$, since it is continuous).</em> Now, by the DCT we get $$ \int_0^1 f_n \xrightarrow[n\to\infty]{} \int_0^1 \lim_{n\to\infty}f_n = \int_0^1 f(0)dx = f(0). $$</p> <p><em>But this is overkill</em>. Another option, explored below, would be to use continuity of $f$ at $0$: recall that $f$ being continuous on a closed interval, it is bounded.</p> <hr> <p>Without loss of generality, we can assume $f(0) = 0$ <em>(indeed, just apply the same exercise to $g\stackrel{\rm def}{=} f-f(0)$. It is easy to check that this will be sufficient to prove the result).</em></p> <p>Fix $\varepsilon&gt;0$, break the interval in 2: $$ \int_0^1 f(x^n)dx = \int_0^a f(x^n)dx +\int_a^1 f(x^n)dx $$ for some convenient $a=a(\varepsilon, f) \stackrel{\rm def}{= } 1-\frac{\varepsilon}{2\lVert f\rVert_\infty}$. The second part has absolute value at most $(1-a)\cdot\lVert f\rVert_\infty &lt; \frac{\varepsilon}{2}$ by definition.</p> <p>As for the first, we use continuity of $f$ at $0$: let $\delta_\varepsilon&gt;0$ be such that $\lvert u\rvert \leq \delta_\varepsilon$ implies $\lvert f(u)\rvert \leq \frac{\varepsilon}{2}$. We then rely on the fact that for $n$ big enough (i.e., for $n\geq N_{\varepsilon}$, for some $N_{\varepsilon}\geq 0$, we have $\lvert x^n\rvert \leq \lvert a^n\rvert \leq \delta_\varepsilon$. Thus, the first term can be bounded as $$ \left\lvert \int_0^a f(x^n)dx\right\rvert \leq \int_0^a \left\lvert f(x^n)\right\rvert dx \leq \int_0^a \frac{\varepsilon}{2} dx \leq \frac{\varepsilon}{2} $$ for any $n\geq N_{\varepsilon}$.</p> <p>Putting it together, for $n\geq N_{\varepsilon}$ one has $$ \left\lvert \int_0^1 f(x^n)dx\right\rvert \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon $$ by the triangle inequality. This shows that $$ \int_0^1 f(x^n)dx\xrightarrow[n\to\infty]{} 0 $$ and from there the general case follows.</p> Answer 32: <p>Since $f $ is continuous, given $\varepsilon&gt;0$ there exists $\delta&gt;0$ such that $|f (x)-f (0)|&lt;\varepsilon/2 $ whenever $|x|&lt;\delta $. Let $M=\max \{|f (x)|:\ x\in [0,1]\}$. Choose $x_0\in (0,1) $ such that $1-x_0 &lt;\varepsilon/2M $. Choose $n_0$ such that $|x_0^{n_0}|&lt;\delta $ . Then, for any $n\geq n_0$,</p> <p>\begin{align} \left|\int_0^1f (x^n)\,dx-f (0)\right| &amp;=\left|\int_0^1(f (x^n)-f (0))\,dx\right| \leq \int_0^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;= \int_0^{x_0}|f (x^n)-f (0)|\,dx + \int_{x_0}^1|f (x^n)-f (0)|\,dx \\ \ \\ &amp;\leq \frac\varepsilon2 x_0+2M (1-x_0)\\ \ \\ &amp;&lt;\frac\varepsilon 2+\frac\varepsilon 2=\varepsilon. 
\end{align}</p> <hr> <p>As an aside, if you have the Fundamental Theorem of Calculus available, the substitution $u=x^n $ does the job.</p> Answer 33: <p>This question popped up recently (2022) and we were redirected here as it was flagged as a <em>duplicate</em>. So here is another two cents opinion. (Not worth more.)</p> <p>As evident here it is difficult for advanced students not to suggest that this is immediate from the Lebesgue dominated convergence theorem. Indeed, while this is stated for continuous functions (with a plea to use the regulated integral) it is true in greater generality:</p> <blockquote> <p><em>Theorem</em> 1. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is Lebesgue integrable and that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0)\right| = \left| \int_0^1 [f(x^n)-f(0)]\,dx \right| \leq \int_0^1|f(x^n)-f(0)|\,dx \to 0 \tag{1} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>The proof (as most will rush to assert) follows from the fact that the sequence of functions <span class="math-container">$\{f(x^n)-f(0)\}$</span> converges a.e. to zero and is dominated by a Lebesgue integrable function. Hence Lebesgue's dominated convergence theorem applies.</p> <p>Too advanced? Well yes for students restricted to the regulated integral or the archaic Riemann integral. But there is a well-known theorem that is not advanced, is not modern (it belongs in the 19th century) and it is adequate to prove this version.</p> <blockquote> <p><em>Theorem</em> 2. Suppose that <span class="math-container">$f:[0,1]\to\mathbb R$</span> is regulated. Then <span class="math-container">$$ \left| \int_0^1 f(x^n)\,dx - f(0+)\right| = \left| \int_0^1 [f(x^n)-f(0+)]\,dx \right| \leq \int_0^1|f(x^n)-f(0+)|\,dx \to 0 \tag{2} $$</span> as <span class="math-container">$n\to \infty$</span>.</p> </blockquote> <p>While we cannot appeal to Lebesgue's version of a convergence theorem there is an earlier weaker version known as the <strong>Arzelà-Osgood bounded convergence theorem</strong>. Osgood proved it for continuous functions and Arzelà for Riemann integrable functions.</p> <p><em>Theorem</em> [Arzelà-Osgood] Suppose that <span class="math-container">$\{f_n\}$</span> is a sequence of Riemann integrable functions on an interval <span class="math-container">$[a,b]$</span> that is (i) uniformly bounded and (ii) converges pointwise to another Riemann integrable function <span class="math-container">$f$</span>. Then <span class="math-container">$ \lim_{n\rightarrow \infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$</span></p> <p><em>Proof of Theorem 2.</em> The sequence <span class="math-container">$\{f(x^n)-f(0+)\}$</span> converges to zero pointwise on <span class="math-container">$(0,1)$</span> since <span class="math-container">$f$</span> is regulated. It is constant at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>. Thus we have it converging pointwise everywhere to another regulated function whose integral is certainly zero. Regulated functions are bounded and hence this sequence is uniformly bounded. Accordingly the Arzelà-Osgood completes the proof.</p> <hr /> <p><strong>Didn't learn the Arzelà-Osgood theorem?</strong> Shame on your instructor. Shame on your textbook. 
As long as the powers that be have determined that students learn the Riemann integral or its more attractive cousin, the regulated integral, rather than the Lebesgue integral, they should teach the full theory.</p>
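For readers who want to see the limit in Answers 31–33 numerically, here is a minimal sketch in Python. The test function $f(x)=\cos x + x^2$ (so $f(0)=1$) and the midpoint-rule grid size are illustrative choices, not part of any answer above:

```python
import numpy as np

def integral_01(g, m=200_000):
    """Midpoint-rule approximation of the integral of g over [0, 1]."""
    x = (np.arange(m) + 0.5) / m   # midpoints of m equal subintervals
    return g(x).mean()             # (1/m) * sum of g at the midpoints

# Arbitrary continuous test function with f(0) = 1.
f = lambda x: np.cos(x) + x**2

# As n grows, x**n -> 0 on [0, 1), so the integral should approach f(0) = 1.
for n in (1, 5, 25, 125, 625):
    val = integral_01(lambda x: f(x**n))
    print(f"n = {n:3d}: integral of f(x^n) over [0,1] = {val:.6f}")
```

The printed values creep toward $1$ as $n$ grows, exactly as the DCT and the epsilon-delta argument predict.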
What's the derivative of: $ \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$?
2,101,745
9
https://math.stackexchange.com/questions/2101745/whats-the-derivative-of-sqrtx-sqrtx-sqrtx-cdots
<p>Let $ y=\displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$; I'm really interested to know how to find:</p> <p>$\displaystyle \frac{dy}{dx}$?</p> <p><strong>Note:</strong> I have used the definition of the derivative of the square root function but I didn't succeed.</p> <p>Thank you for any help.</p>
[ { "answer_id": 2101747, "score": 11, "text": "<p>I am going to assume that the given problem is over $\\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. Over $\\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\\sqrt{x}$.</p>\n\n<p>Such function fulfills $f(x)^2 = x+f(x)$, hence by termwise differentiation\n$$ 2\\,f'(x)\\,f(x) = 1 + f'(x) $$\nand:\n$$ \\frac{d}{dx}\\,f(x) = \\frac{1}{2\\,f(x)-1}.$$</p>\n" }, { "answer_id": 2101752, "score": 2, "text": "<p>$ y = \\sqrt{x+\\left(\\sqrt{{x}+\\sqrt{x+\\cdots}}\\right)}$</p>\n\n<p>$y = \\sqrt{x + y}$</p>\n\n<p>Then $y^2 = x + y$</p>\n\n<p>Now find derivative.</p>\n\n<p>$2y\\frac{dy}{dx} = 1 + \\frac{dy}{dx}$</p>\n\n<p>$2y\\frac{dy}{dx} - \\frac{dy}{dx} = 1$</p>\n\n<p>$(2y - 1)\\frac{dy}{dx} = 1$</p>\n\n<p>$\\frac{dy}{dx} = \\frac{1}{2y - 1}$</p>\n" }, { "answer_id": 2101760, "score": 0, "text": "<p>Let$$y = \\displaystyle \\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}$$\nIts an infinite series so adding one term more will have not affect its value</p>\n\n<p>$$y = \\displaystyle \\sqrt{x+\\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}}$$</p>\n\n<p>From above equations\n$$y = \\sqrt{x+y}$$\n$$y^2 = x+y$$\nDifferentiating w.r.t. $x$, we get\n$$2y\\frac{dy}{dx}=1+\\frac{dy}{dx}$$\n$$(2y-1)\\frac{dy}{dx} = 1$$\n$$\\frac{dy}{dx}=\\frac{1}{2y-1}$$\n$$\\frac{dy}{dx}=\\frac{1}{2y-1}$$</p>\n\n<p>where $y = \\displaystyle \\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}$</p>\n" }, { "answer_id": 4509838, "score": 0, "text": "<p>Clarification to solve the question <br />\nThe question posed is calcule Derivative <span class=\"math-container\">$\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}}$</span> <br />\nwe put <span class=\"math-container\">$y=\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}} $</span> <br />\nBefore calculating the derivative we simplify the expression <span class=\"math-container\">$y$</span><br />\n<span class=\"math-container\">$y^{2}=x+\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}}=x+y$</span><br />\n<span class=\"math-container\">$y^{2}-y-x=0$</span><br />\nWe solve the equation <span class=\"math-container\">$y^{2}-y-x=0$</span> <br />\nThe solution to this equation is <span class=\"math-container\">$y=\\frac{-b+\\sqrt{\\bigtriangleup }}{2a}=\\frac{1+\\sqrt{1+4x}}{2}$</span><br />\nAfter simplifying the relationship y we calculate <span class=\"math-container\">$\\frac{dy}{dx}$</span><br />\nwe've got <span class=\"math-container\">$\\frac{dy}{dx}=\\frac{1}{2}.\\frac{4}{2\\sqrt{1+4x}}=\\frac{1}{\\sqrt{1+4x}}$</span><br />\nIn the general case, if\n<span class=\"math-container\">$y=\\sqrt{f(x)+\\sqrt{f(x)+\\sqrt{f(x)+...}}}$</span><br />\n<span class=\"math-container\">$\\Rightarrow y^{2}=f(x)+y$</span><br />\n<span class=\"math-container\">$y^{2}-y-f(x)=0$</span><br />\n<span class=\"math-container\">$y=\\frac{-b+\\sqrt{\\bigtriangleup }}{2a}=\\frac{1+\\sqrt{1+4f(x)}}{2}$</span><br />\n<span class=\"math-container\">$\\frac{dy}{dx}=\\frac{1}{2}\\frac{4\\acute{f}(x)}{2\\sqrt{1+4f(x)}}=\\frac{\\acute{f}(x)}{\\sqrt{1+4f(x)}}$</span></p>\n" }, { "answer_id": 2101747, "score": 11, "text": "<p>I am going to assume that the given problem is over $\\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. 
Over $\\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\\sqrt{x}$.</p>\n\n<p>Such function fulfills $f(x)^2 = x+f(x)$, hence by termwise differentiation\n$$ 2\\,f'(x)\\,f(x) = 1 + f'(x) $$\nand:\n$$ \\frac{d}{dx}\\,f(x) = \\frac{1}{2\\,f(x)-1}.$$</p>\n" }, { "answer_id": 2101752, "score": 2, "text": "<p>$ y = \\sqrt{x+\\left(\\sqrt{{x}+\\sqrt{x+\\cdots}}\\right)}$</p>\n\n<p>$y = \\sqrt{x + y}$</p>\n\n<p>Then $y^2 = x + y$</p>\n\n<p>Now find derivative.</p>\n\n<p>$2y\\frac{dy}{dx} = 1 + \\frac{dy}{dx}$</p>\n\n<p>$2y\\frac{dy}{dx} - \\frac{dy}{dx} = 1$</p>\n\n<p>$(2y - 1)\\frac{dy}{dx} = 1$</p>\n\n<p>$\\frac{dy}{dx} = \\frac{1}{2y - 1}$</p>\n" }, { "answer_id": 2101760, "score": 0, "text": "<p>Let$$y = \\displaystyle \\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}$$\nIts an infinite series so adding one term more will have not affect its value</p>\n\n<p>$$y = \\displaystyle \\sqrt{x+\\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}}$$</p>\n\n<p>From above equations\n$$y = \\sqrt{x+y}$$\n$$y^2 = x+y$$\nDifferentiating w.r.t. $x$, we get\n$$2y\\frac{dy}{dx}=1+\\frac{dy}{dx}$$\n$$(2y-1)\\frac{dy}{dx} = 1$$\n$$\\frac{dy}{dx}=\\frac{1}{2y-1}$$\n$$\\frac{dy}{dx}=\\frac{1}{2y-1}$$</p>\n\n<p>where $y = \\displaystyle \\sqrt{x+\\sqrt{{x}+\\sqrt{x+\\cdots}}}$</p>\n" }, { "answer_id": 4509838, "score": 0, "text": "<p>Clarification to solve the question <br />\nThe question posed is calcule Derivative <span class=\"math-container\">$\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}}$</span> <br />\nwe put <span class=\"math-container\">$y=\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}} $</span> <br />\nBefore calculating the derivative we simplify the expression <span class=\"math-container\">$y$</span><br />\n<span class=\"math-container\">$y^{2}=x+\\sqrt{x+\\sqrt{x+\\sqrt{x+....}}}=x+y$</span><br />\n<span class=\"math-container\">$y^{2}-y-x=0$</span><br />\nWe solve the equation <span class=\"math-container\">$y^{2}-y-x=0$</span> <br />\nThe solution to this equation is <span class=\"math-container\">$y=\\frac{-b+\\sqrt{\\bigtriangleup }}{2a}=\\frac{1+\\sqrt{1+4x}}{2}$</span><br />\nAfter simplifying the relationship y we calculate <span class=\"math-container\">$\\frac{dy}{dx}$</span><br />\nwe've got <span class=\"math-container\">$\\frac{dy}{dx}=\\frac{1}{2}.\\frac{4}{2\\sqrt{1+4x}}=\\frac{1}{\\sqrt{1+4x}}$</span><br />\nIn the general case, if\n<span class=\"math-container\">$y=\\sqrt{f(x)+\\sqrt{f(x)+\\sqrt{f(x)+...}}}$</span><br />\n<span class=\"math-container\">$\\Rightarrow y^{2}=f(x)+y$</span><br />\n<span class=\"math-container\">$y^{2}-y-f(x)=0$</span><br />\n<span class=\"math-container\">$y=\\frac{-b+\\sqrt{\\bigtriangleup }}{2a}=\\frac{1+\\sqrt{1+4f(x)}}{2}$</span><br />\n<span class=\"math-container\">$\\frac{dy}{dx}=\\frac{1}{2}\\frac{4\\acute{f}(x)}{2\\sqrt{1+4f(x)}}=\\frac{\\acute{f}(x)}{\\sqrt{1+4f(x)}}$</span></p>\n" }, { "answer_id": 2101747, "score": 11, "text": "<p>I am going to assume that the given problem is over $\\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. 
Question: What's the derivative of: $ \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$? Answer 1: <p>I am going to assume that the given problem is over $\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. Over $\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\sqrt{x}$.</p> <p>Such a function fulfills $f(x)^2 = x+f(x)$, hence by differentiating both sides $$ 2\,f'(x)\,f(x) = 1 + f'(x) $$ and: $$ \frac{d}{dx}\,f(x) = \frac{1}{2\,f(x)-1}.$$</p> Answer 2: <p>$ y = \sqrt{x+\left(\sqrt{{x}+\sqrt{x+\cdots}}\right)}$</p> <p>$y = \sqrt{x + y}$</p> <p>Then $y^2 = x + y$</p> <p>Now find the derivative.</p> <p>$2y\frac{dy}{dx} = 1 + \frac{dy}{dx}$</p> <p>$2y\frac{dy}{dx} - \frac{dy}{dx} = 1$</p> <p>$(2y - 1)\frac{dy}{dx} = 1$</p> <p>$\frac{dy}{dx} = \frac{1}{2y - 1}$</p> Answer 3: <p>Let $$y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$$ It is an infinite nested radical, so appending one more term does not affect its value</p> <p>$$y = \displaystyle \sqrt{x+\sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}}$$</p> <p>From the above equations $$y = \sqrt{x+y}$$ $$y^2 = x+y$$ Differentiating w.r.t. $x$, we get $$2y\frac{dy}{dx}=1+\frac{dy}{dx}$$ $$(2y-1)\frac{dy}{dx} = 1$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$</p> <p>where $y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$</p> Answer 4: <p>Clarification to solve the question <br /> The question posed is to calculate the derivative of <span class="math-container">$\sqrt{x+\sqrt{x+\sqrt{x+....}}}$</span> <br /> We put <span class="math-container">$y=\sqrt{x+\sqrt{x+\sqrt{x+....}}} $</span> <br /> Before calculating the derivative we simplify the expression <span class="math-container">$y$</span><br /> <span class="math-container">$y^{2}=x+\sqrt{x+\sqrt{x+\sqrt{x+....}}}=x+y$</span><br /> <span class="math-container">$y^{2}-y-x=0$</span><br /> We solve the equation <span class="math-container">$y^{2}-y-x=0$</span>, taking the positive root since <span class="math-container">$y\geq 0$</span> <br /> The solution to this equation is <span class="math-container">$y=\frac{-b+\sqrt{\Delta}}{2a}=\frac{1+\sqrt{1+4x}}{2}$</span><br /> After simplifying the expression for <span class="math-container">$y$</span> we calculate <span class="math-container">$\frac{dy}{dx}$</span><br /> and get <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}\cdot\frac{4}{2\sqrt{1+4x}}=\frac{1}{\sqrt{1+4x}}$</span><br /> In the general case, if <span class="math-container">$y=\sqrt{f(x)+\sqrt{f(x)+\sqrt{f(x)+...}}}$</span><br /> <span class="math-container">$\Rightarrow y^{2}=f(x)+y$</span><br /> <span class="math-container">$y^{2}-y-f(x)=0$</span><br /> <span class="math-container">$y=\frac{-b+\sqrt{\Delta}}{2a}=\frac{1+\sqrt{1+4f(x)}}{2}$</span><br /> <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}\cdot\frac{4f'(x)}{2\sqrt{1+4f(x)}}=\frac{f'(x)}{\sqrt{1+4f(x)}}$</span></p>
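Note that the four answers are mutually consistent: since $y=\frac{1+\sqrt{1+4x}}{2}$, we have $2y-1=\sqrt{1+4x}$, so the form $\frac{1}{2y-1}$ from Answers 1-3 equals the form $\frac{1}{\sqrt{1+4x}}$ from Answer 4. A minimal numeric sanity check of both closed forms, sketched in Python (the helper name nested_radical, the iteration depth, and the step size h are illustrative choices, not taken from any of the answers):

    import math

    def nested_radical(x, depth=50):
        # Approximate sqrt(x + sqrt(x + ...)) by iterating y <- sqrt(x + y);
        # for x > 0 this converges to the positive root of y^2 = x + y.
        y = 0.0
        for _ in range(depth):
            y = math.sqrt(x + y)
        return y

    x, h = 2.0, 1e-6
    numeric = (nested_radical(x + h) - nested_radical(x - h)) / (2 * h)  # central difference
    form_a = 1.0 / (2.0 * nested_radical(x) - 1.0)  # Answers 1-3: 1/(2y - 1)
    form_b = 1.0 / math.sqrt(1.0 + 4.0 * x)         # Answer 4: 1/sqrt(1 + 4x)
    print(numeric, form_a, form_b)                  # all approximately 1/3 at x = 2

At $x=2$ the radical evaluates to exactly $2$ (since $2^2 = 2 + 2$), so all three printed values should agree with $1/3$ to many digits.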
Over $\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\sqrt{x}$.</p> <p>Such function fulfills $f(x)^2 = x+f(x)$, hence by termwise differentiation $$ 2\,f'(x)\,f(x) = 1 + f'(x) $$ and: $$ \frac{d}{dx}\,f(x) = \frac{1}{2\,f(x)-1}.$$</p> Answer 6: <p>$ y = \sqrt{x+\left(\sqrt{{x}+\sqrt{x+\cdots}}\right)}$</p> <p>$y = \sqrt{x + y}$</p> <p>Then $y^2 = x + y$</p> <p>Now find derivative.</p> <p>$2y\frac{dy}{dx} = 1 + \frac{dy}{dx}$</p> <p>$2y\frac{dy}{dx} - \frac{dy}{dx} = 1$</p> <p>$(2y - 1)\frac{dy}{dx} = 1$</p> <p>$\frac{dy}{dx} = \frac{1}{2y - 1}$</p> Answer 7: <p>Let$$y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$$ Its an infinite series so adding one term more will have not affect its value</p> <p>$$y = \displaystyle \sqrt{x+\sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}}$$</p> <p>From above equations $$y = \sqrt{x+y}$$ $$y^2 = x+y$$ Differentiating w.r.t. $x$, we get $$2y\frac{dy}{dx}=1+\frac{dy}{dx}$$ $$(2y-1)\frac{dy}{dx} = 1$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$</p> <p>where $y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$</p> Answer 8: <p>Clarification to solve the question <br /> The question posed is calcule Derivative <span class="math-container">$\sqrt{x+\sqrt{x+\sqrt{x+....}}}$</span> <br /> we put <span class="math-container">$y=\sqrt{x+\sqrt{x+\sqrt{x+....}}} $</span> <br /> Before calculating the derivative we simplify the expression <span class="math-container">$y$</span><br /> <span class="math-container">$y^{2}=x+\sqrt{x+\sqrt{x+\sqrt{x+....}}}=x+y$</span><br /> <span class="math-container">$y^{2}-y-x=0$</span><br /> We solve the equation <span class="math-container">$y^{2}-y-x=0$</span> <br /> The solution to this equation is <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4x}}{2}$</span><br /> After simplifying the relationship y we calculate <span class="math-container">$\frac{dy}{dx}$</span><br /> we've got <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}.\frac{4}{2\sqrt{1+4x}}=\frac{1}{\sqrt{1+4x}}$</span><br /> In the general case, if <span class="math-container">$y=\sqrt{f(x)+\sqrt{f(x)+\sqrt{f(x)+...}}}$</span><br /> <span class="math-container">$\Rightarrow y^{2}=f(x)+y$</span><br /> <span class="math-container">$y^{2}-y-f(x)=0$</span><br /> <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4f(x)}}{2}$</span><br /> <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}\frac{4\acute{f}(x)}{2\sqrt{1+4f(x)}}=\frac{\acute{f}(x)}{\sqrt{1+4f(x)}}$</span></p> Answer 9: <p>I am going to assume that the given problem is over $\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. 
Over $\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\sqrt{x}$.</p> <p>Such function fulfills $f(x)^2 = x+f(x)$, hence by termwise differentiation $$ 2\,f'(x)\,f(x) = 1 + f'(x) $$ and: $$ \frac{d}{dx}\,f(x) = \frac{1}{2\,f(x)-1}.$$</p> Answer 10: <p>$ y = \sqrt{x+\left(\sqrt{{x}+\sqrt{x+\cdots}}\right)}$</p> <p>$y = \sqrt{x + y}$</p> <p>Then $y^2 = x + y$</p> <p>Now find derivative.</p> <p>$2y\frac{dy}{dx} = 1 + \frac{dy}{dx}$</p> <p>$2y\frac{dy}{dx} - \frac{dy}{dx} = 1$</p> <p>$(2y - 1)\frac{dy}{dx} = 1$</p> <p>$\frac{dy}{dx} = \frac{1}{2y - 1}$</p> Answer 11: <p>Let$$y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$$ Its an infinite series so adding one term more will have not affect its value</p> <p>$$y = \displaystyle \sqrt{x+\sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}}$$</p> <p>From above equations $$y = \sqrt{x+y}$$ $$y^2 = x+y$$ Differentiating w.r.t. $x$, we get $$2y\frac{dy}{dx}=1+\frac{dy}{dx}$$ $$(2y-1)\frac{dy}{dx} = 1$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$</p> <p>where $y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$</p> Answer 12: <p>Clarification to solve the question <br /> The question posed is calcule Derivative <span class="math-container">$\sqrt{x+\sqrt{x+\sqrt{x+....}}}$</span> <br /> we put <span class="math-container">$y=\sqrt{x+\sqrt{x+\sqrt{x+....}}} $</span> <br /> Before calculating the derivative we simplify the expression <span class="math-container">$y$</span><br /> <span class="math-container">$y^{2}=x+\sqrt{x+\sqrt{x+\sqrt{x+....}}}=x+y$</span><br /> <span class="math-container">$y^{2}-y-x=0$</span><br /> We solve the equation <span class="math-container">$y^{2}-y-x=0$</span> <br /> The solution to this equation is <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4x}}{2}$</span><br /> After simplifying the relationship y we calculate <span class="math-container">$\frac{dy}{dx}$</span><br /> we've got <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}.\frac{4}{2\sqrt{1+4x}}=\frac{1}{\sqrt{1+4x}}$</span><br /> In the general case, if <span class="math-container">$y=\sqrt{f(x)+\sqrt{f(x)+\sqrt{f(x)+...}}}$</span><br /> <span class="math-container">$\Rightarrow y^{2}=f(x)+y$</span><br /> <span class="math-container">$y^{2}-y-f(x)=0$</span><br /> <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4f(x)}}{2}$</span><br /> <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}\frac{4\acute{f}(x)}{2\sqrt{1+4f(x)}}=\frac{\acute{f}(x)}{\sqrt{1+4f(x)}}$</span></p> Answer 13: <p>I am going to assume that the given problem is over $\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. 
Over $\mathbb{R}^+$, the given function is differentiable by the concavity of $g(x)=\sqrt{x}$.</p> <p>Such function fulfills $f(x)^2 = x+f(x)$, hence by termwise differentiation $$ 2\,f'(x)\,f(x) = 1 + f'(x) $$ and: $$ \frac{d}{dx}\,f(x) = \frac{1}{2\,f(x)-1}.$$</p> Answer 14: <p>$ y = \sqrt{x+\left(\sqrt{{x}+\sqrt{x+\cdots}}\right)}$</p> <p>$y = \sqrt{x + y}$</p> <p>Then $y^2 = x + y$</p> <p>Now find derivative.</p> <p>$2y\frac{dy}{dx} = 1 + \frac{dy}{dx}$</p> <p>$2y\frac{dy}{dx} - \frac{dy}{dx} = 1$</p> <p>$(2y - 1)\frac{dy}{dx} = 1$</p> <p>$\frac{dy}{dx} = \frac{1}{2y - 1}$</p> Answer 15: <p>Let$$y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$$ Its an infinite series so adding one term more will have not affect its value</p> <p>$$y = \displaystyle \sqrt{x+\sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}}$$</p> <p>From above equations $$y = \sqrt{x+y}$$ $$y^2 = x+y$$ Differentiating w.r.t. $x$, we get $$2y\frac{dy}{dx}=1+\frac{dy}{dx}$$ $$(2y-1)\frac{dy}{dx} = 1$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$ $$\frac{dy}{dx}=\frac{1}{2y-1}$$</p> <p>where $y = \displaystyle \sqrt{x+\sqrt{{x}+\sqrt{x+\cdots}}}$</p> Answer 16: <p>Clarification to solve the question <br /> The question posed is calcule Derivative <span class="math-container">$\sqrt{x+\sqrt{x+\sqrt{x+....}}}$</span> <br /> we put <span class="math-container">$y=\sqrt{x+\sqrt{x+\sqrt{x+....}}} $</span> <br /> Before calculating the derivative we simplify the expression <span class="math-container">$y$</span><br /> <span class="math-container">$y^{2}=x+\sqrt{x+\sqrt{x+\sqrt{x+....}}}=x+y$</span><br /> <span class="math-container">$y^{2}-y-x=0$</span><br /> We solve the equation <span class="math-container">$y^{2}-y-x=0$</span> <br /> The solution to this equation is <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4x}}{2}$</span><br /> After simplifying the relationship y we calculate <span class="math-container">$\frac{dy}{dx}$</span><br /> we've got <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}.\frac{4}{2\sqrt{1+4x}}=\frac{1}{\sqrt{1+4x}}$</span><br /> In the general case, if <span class="math-container">$y=\sqrt{f(x)+\sqrt{f(x)+\sqrt{f(x)+...}}}$</span><br /> <span class="math-container">$\Rightarrow y^{2}=f(x)+y$</span><br /> <span class="math-container">$y^{2}-y-f(x)=0$</span><br /> <span class="math-container">$y=\frac{-b+\sqrt{\bigtriangleup }}{2a}=\frac{1+\sqrt{1+4f(x)}}{2}$</span><br /> <span class="math-container">$\frac{dy}{dx}=\frac{1}{2}\frac{4\acute{f}(x)}{2\sqrt{1+4f(x)}}=\frac{\acute{f}(x)}{\sqrt{1+4f(x)}}$</span></p> Answer 17: <p>I am going to assume that the given problem is over $\mathbb{R}^+$, otherwise the definition of $f$ makes no sense. 
Is there any continuous function that satisfies the following conditions?
2,102,307
9
https://math.stackexchange.com/questions/2102307/is-there-any-continuous-function-that-satisfies-the-following-conditions
<p>Is there any continuous function $y=f(x)$ on $[0,1]$ that satisfies the following conditions? $$ f(0) = f(1) = 0, $$ and $$ f(a)^2-f(2a)f(b)+f(b)^2&lt;0, $$ for some $0&lt;a&lt;b&lt;1$.</p> <p>I tried to test with several functions (with different $a,b$) but none of them satisfied the conditions. </p> <p>Any help is appreciated. Thank you in advance.</p>
[ { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. 
a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. 
a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. 
a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. 
a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" }, { "answer_id": 2102322, "score": 16, "text": "<p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p>\n\n<p>So try</p>\n\n<p>$$\nf(x) = x(1-x)e^{-20(x-\\frac12)^2}\n$$\nwith $a=\\frac14$ and $b=\\frac34$. </p>\n\n<p>There, \n$$f(a)^2 - f(2a)f(b) + f(b)^2 \\approx -0.00766 &lt;0$$</p>\n" }, { "answer_id": 2102348, "score": 16, "text": "<p>All you have to do is choose numbers $a,b$ so that $0\\lt a\\lt 2a\\lt b\\lt1$ (or $0\\lt a\\lt b\\lt2a\\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\\ f(a)=y_1,\\ f(2a)=y_2,\\ f(b)=y_3.$ For example:</p>\n\n<p>$$f(x)=x(x-1)(4x-1)(8x-7),\\ a=\\frac14,\\ b=\\frac34$$</p>\n" } ]
Question: Is there any continuous function that satisfies the following conditions? Answer 1: <p>What you want is a function that is highly peaked around $x=2a$. And you need to enforce the conditions that $f(0)=f(1)=0$; for that you can put in a factor of $x(1-x)$. </p> <p>So try</p> <p>$$ f(x) = x(1-x)e^{-20(x-\frac12)^2} $$ with $a=\frac14$ and $b=\frac34$. </p> <p>There, $$f(a)^2 - f(2a)f(b) + f(b)^2 \approx -0.00766 &lt;0$$</p> Answer 2: <p>All you have to do is choose numbers $a,b$ so that $0\lt a\lt 2a\lt b\lt1$ (or $0\lt a\lt b\lt2a\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\ f(a)=y_1,\ f(2a)=y_2,\ f(b)=y_3.$ For example:</p> <p>$$f(x)=x(x-1)(4x-1)(8x-7),\ a=\frac14,\ b=\frac34$$</p>
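A minimal numeric sanity check of the two answers above, as a Python sketch (not part of the original posts; the helper names bump and quartic are our own labels):

import math

a, b = 0.25, 0.75   # the choices a = 1/4, b = 3/4 used in both answers

def bump(x):
    # Answer 1's highly peaked candidate: x(1-x) e^{-20(x-1/2)^2}
    return x * (1 - x) * math.exp(-20 * (x - 0.5) ** 2)

def quartic(x):
    # Answer 2's fourth degree polynomial: x(x-1)(4x-1)(8x-7)
    return x * (x - 1) * (4 * x - 1) * (8 * x - 7)

for f in (bump, quartic):
    # the quantity both answers make negative: f(a)^2 - f(2a) f(b) + f(b)^2
    val = f(a) ** 2 - f(2 * a) * f(b) + f(b) ** 2
    print(f.__name__, val)  # approx. -0.0077 and -0.1406, both negative

Both candidates also visibly satisfy f(0) = f(1) = 0, so they meet the boundary conditions stated in the question.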
Limit of an indicator function?
2,103,049
9
https://math.stackexchange.com/questions/2103049/limit-of-an-indicator-function
<p>I'm having some trouble computing limits of indicator functions. Why does the following limit take the value $0$? $$\lim_{n\to\infty}1_{[n,n+1]}$$ What steps should I follow to compute limits of such functions in general? I started by applying the definition of a limit as the $x$-values go to infinity, but at some point I get really confused by the set attached to the indicator function... Does it play the same role that the domain does for other functions? How does this work for the limsup and liminf of an indicator function? Thank you! </p>
[ { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any given $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that the sequence approaches zero. Why? The point is that <span class=\"math-container\">$1_{[n,n+1]}(x) = 0$</span> for all but a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}(x)-0| &lt; \\varepsilon \\ \\text{whenever} \\ n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]}(x) = 0$</span> for every <span class=\"math-container\">$n$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]}(x) = 0$</span> for <span class=\"math-container\">$n \\ge N=\\lfloor x \\rfloor + 1$</span>, since then <span class=\"math-container\">$n &gt; x$</span>.</p>\n<p>Actually the following stronger statement is true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\ |1_{[n,n+1]}(x)-0| &lt; \\varepsilon \\ \\text{for all but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced, but since you know the liminf and limsup of numbers, I guess you could understand the liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} \\bigl([m,m+1] \\cup [m+1,m+2] \\cup \\cdots\\bigr)$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty) = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude from this inclusion that <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span>, or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} \\bigl([m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots\\bigr)$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\bigl(\\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots\\bigr) = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$\\liminf = \\limsup$</span>, which is the case here, we define <span class=\"math-container\">$\\lim := \\liminf = \\limsup$</span>.</p>\n<p>Now there's a <a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> result that says:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\lim_n 1_{A_n} = 1_{\\lim_n A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\lim_n 1_{[n,n+1]} = 1_{\\lim_n [n,n+1]} = 1_{\\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $N\\in\\mathbb{N}$ with $N&gt;x$, so that $x\\notin [N,N+1]$; then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ as $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_{[n,n+1]}(x)-0|=1$ for all $n\\in\\mathbb{N}$, so there is no convergence in the $\\sup$-norm.</p>\n" } ]
Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a 
<a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] 
:= \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a <a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. 
Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a 
<a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] 
:= \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a <a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. 
Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a 
<a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] 
:= \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a <a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. 
Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a 
<a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] 
:= \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a <a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. 
Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] := \\bigcap_{m=1}^{\\infty} \\bigcup_{n=m}^{\\infty} [n,n+1] = \\bigcap_{m=1}^{\\infty} [m,m+1] \\cup [m+1,m+2] \\cup \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcap_{m=1}^{\\infty} [m, \\infty)$$</span></p>\n<p><span class=\"math-container\">$$ = [1,\\infty) \\cap [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = [2,\\infty) \\cap [3,\\infty) \\cap \\cdots = \\emptyset$$</span></p>\n<p>Now <span class=\"math-container\">$\\liminf_n [n,n+1] \\subseteq \\limsup_n [n,n+1]$</span> where</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] := \\bigcup_{m=1}^{\\infty} \\bigcap_{n=m}^{\\infty} [n,n+1]$$</span></p>\n<p>We could conclude <span class=\"math-container\">$\\liminf_n [n,n+1] = \\emptyset$</span> or simply compute explicitly:</p>\n<p><span class=\"math-container\">$$\\liminf_n [n,n+1] = \\bigcup_{m=1}^{\\infty} [m,m+1] \\cap [m+1, m+2] \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\{m+1\\} \\cap [m+2, m+3] \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset \\cap \\cdots$$</span></p>\n<p><span class=\"math-container\">$$ = \\bigcup_{m=1}^{\\infty} \\emptyset = \\emptyset$$</span></p>\n<p>Now if <span class=\"math-container\">$$\\liminf = \\limsup$$</span>, which is the case, define <span class=\"math-container\">$$\\lim := \\liminf := \\limsup$$</span></p>\n<p>Now there's a 
<a href=\"https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035\">folklore</a> that goes:</p>\n<blockquote>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{A_n} = 1_{\\lim A_n}$$</span></p>\n</blockquote>\n<p>Thus</p>\n<p><span class=\"math-container\">$$\\to \\lim_n 1_{[n,n+1]} = 1_{\\lim [n,n+1] = \\emptyset} = 0$$</span></p>\n" }, { "answer_id": 2103053, "score": 0, "text": "<p>Take $x\\in\\mathbb{R}$ arbitrarily and choose $x&lt;N\\in\\mathbb{N}$ such that $x\\notin [N,N+1]$ then</p>\n\n<p>$\\chi_{[n,n+1]}(x)=0$ for all $n\\geq N$</p>\n\n<p>and thus $\\chi_{[n,n+1]}(x)\\rightarrow 0$ for $n\\rightarrow \\infty$ pointwise.</p>\n\n<p>But $\\sup_{x\\in\\mathbb{R}}|\\chi_n(x)-0|=1$ for all $n\\in\\mathbb{N}$. So there is no convergence in the $\\sup$-norm.</p>\n" }, { "answer_id": 2103059, "score": 2, "text": "<p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\\to\\infty$, the interval which supports $1_{[n,n+1]}$ will \"fly off\" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p>\n" }, { "answer_id": 2702169, "score": 2, "text": "<p>Our conjecture is that it approaches zero. Why? The thing is <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for all but only a few <span class=\"math-container\">$n$</span>'s. Consider the following:</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[n,n+1]}(5) = 0 \\ \\text{unless} \\ n=4,5$$</span></p>\n<p>Also,</p>\n<p><span class=\"math-container\">$$1_{[3,3+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[4,4+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 1$$</span></p>\n<p><span class=\"math-container\">$$1_{[6,6+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$1_{[7,7+1]}(5.5) = 0$$</span></p>\n<p><span class=\"math-container\">$$\\vdots$$</span></p>\n<p><span class=\"math-container\">$$1_{[5,5+1]}(5.5) = 0 \\ \\text{unless} \\ n=5$$</span></p>\n<p>Thus, we must (try to) show that for any <span class=\"math-container\">$x$</span>,</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, \\exists N \\in \\mathbb N : |1_{[n,n+1]}-0| &lt; \\varepsilon \\leftarrow n \\ge N$$</span></p>\n<p>For <span class=\"math-container\">$x&lt;1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class=\"math-container\">$N$</span>.</p>\n<p>For <span class=\"math-container\">$x\\ge1$</span>, <span class=\"math-container\">$1_{[n,n+1]} = 0$</span> for <span class=\"math-container\">$n \\ge N=\\text{ceiling}(x)$</span>.</p>\n<p>Actually the following would be true as well:</p>\n<p><span class=\"math-container\">$$\\forall \\varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \\varepsilon \\ \\forall \\ \\text{but finitely many} \\ n \\ge 1$$</span></p>\n<hr />\n<p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p>\n<p>Observe that</p>\n<p><span class=\"math-container\">$$\\limsup_n [n,n+1] 
Question: Limit of an indicator function? Answer 1: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any given $x$.</p>
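To make the "fly off" picture concrete, here is a minimal Python sketch (the helper `indicator` and the sample points are my own illustration, not part of the original answer): for each fixed $x$, at most two terms of the sequence equal $1$, and every term with $n$ beyond $x$ is $0$.

```python
# A fixed x meets only the intervals [n, n+1] with n <= x <= n + 1, so the
# tail of the sequence 1_{[n,n+1]}(x) is identically zero.

def indicator(n, x):
    """Value of 1_{[n,n+1]} at the point x."""
    return 1 if n <= x <= n + 1 else 0

for x in (5, 5.5):
    print(x, [indicator(n, x) for n in range(1, 15)])
# 5   -> nonzero only at n = 4 and n = 5
# 5.5 -> nonzero only at n = 5
```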
<p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 3: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 4: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 5: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 18: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 19: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 20: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 21: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 22: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 23: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 24: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 25: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 26: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 27: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 28: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 29: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 30: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 31: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 32: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 33: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 34: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 35: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 36: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 37: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 38: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 39: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 40: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 41: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 42: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 43: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 44: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 45: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 46: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 47: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 48: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 49: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 50: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. 
Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> where</p> <p><span class="math-container">$$\liminf_n [n,n+1] := \bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty} [n,n+1]$$</span></p> <p>We could conclude <span class="math-container">$\liminf_n [n,n+1] = \emptyset$</span> or simply compute explicitly:</p> <p><span class="math-container">$$\liminf_n [n,n+1] = \bigcup_{m=1}^{\infty} [m,m+1] \cap [m+1, m+2] \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \{m+1\} \cap [m+2, m+3] \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset \cap \cdots$$</span></p> <p><span class="math-container">$$ = \bigcup_{m=1}^{\infty} \emptyset = \emptyset$$</span></p> <p>Now if <span class="math-container">$$\liminf = \limsup$$</span>, which is the case, define <span class="math-container">$$\lim := \liminf := \limsup$$</span></p> <p>Now there's a <a href="https://academia.stackexchange.com/questions/116019/inadmissible-theorems-in-research/116035#116035">folklore</a> that goes:</p> <blockquote> <p><span class="math-container">$$\to \lim_n 
1_{A_n} = 1_{\lim A_n}$$</span></p> </blockquote> <p>Thus</p> <p><span class="math-container">$$\to \lim_n 1_{[n,n+1]} = 1_{\lim [n,n+1] = \emptyset} = 0$$</span></p> Answer 51: <p>Take $x\in\mathbb{R}$ arbitrarily and choose $x&lt;N\in\mathbb{N}$ such that $x\notin [N,N+1]$ then</p> <p>$\chi_{[n,n+1]}(x)=0$ for all $n\geq N$</p> <p>and thus $\chi_{[n,n+1]}(x)\rightarrow 0$ for $n\rightarrow \infty$ pointwise.</p> <p>But $\sup_{x\in\mathbb{R}}|\chi_n(x)-0|=1$ for all $n\in\mathbb{N}$. So there is no convergence in the $\sup$-norm.</p> Answer 52: <p>Answering your first question: For a given $x$, there are at most two values of $n$ for which $1_{[n,n+1]}$ is nonzero. As $n\to\infty$, the interval which supports $1_{[n,n+1]}$ will "fly off" to the right, passing any given $x$. So in the tail of the sequence, $1_{[n,n+1]}(x)$ is eventually zero for any give $x$.</p> Answer 53: <p>Our conjecture is that it approaches zero. Why? The thing is <span class="math-container">$1_{[n,n+1]} = 0$</span> for all but only a few <span class="math-container">$n$</span>'s. Consider the following:</p> <p><span class="math-container">$$1_{[3,3+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[n,n+1]}(5) = 0 \ \text{unless} \ n=4,5$$</span></p> <p>Also,</p> <p><span class="math-container">$$1_{[3,3+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[4,4+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 1$$</span></p> <p><span class="math-container">$$1_{[6,6+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$1_{[7,7+1]}(5.5) = 0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$1_{[5,5+1]}(5.5) = 0 \ \text{unless} \ n=5$$</span></p> <p>Thus, we must (try to) show that for any <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, \exists N \in \mathbb N : |1_{[n,n+1]}-0| &lt; \varepsilon \leftarrow n \ge N$$</span></p> <p>For <span class="math-container">$x&lt;1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span>, so we can pick any <span class="math-container">$N$</span>.</p> <p>For <span class="math-container">$x\ge1$</span>, <span class="math-container">$1_{[n,n+1]} = 0$</span> for <span class="math-container">$n \ge N=\text{ceiling}(x)$</span>.</p> <p>Actually the following would be true as well:</p> <p><span class="math-container">$$\forall \varepsilon &gt; 0, |1_{[n,n+1]}-0| &lt; \varepsilon \ \forall \ \text{but finitely many} \ n \ge 1$$</span></p> <hr /> <p>This may be advanced but since you know liminf and limsup of numbers, I guess you could understand liminf and limsup of sets.</p> <p>Observe that</p> <p><span class="math-container">$$\limsup_n [n,n+1] := \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} [n,n+1] = \bigcap_{m=1}^{\infty} [m,m+1] \cup [m+1,m+2] \cup \cdots$$</span></p> <p><span class="math-container">$$ = \bigcap_{m=1}^{\infty} [m, \infty)$$</span></p> <p><span class="math-container">$$ = [1,\infty) \cap [2,\infty) \cap [3,\infty) \cap \cdots = [2,\infty) \cap [3,\infty) \cap \cdots = \emptyset$$</span></p> <p>Now <span class="math-container">$\liminf_n [n,n+1] \subseteq \limsup_n [n,n+1]$</span> 
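The two claims above (pointwise convergence to $0$, and a sup-norm distance that stays equal to $1$) are easy to sanity-check numerically. Below is a minimal sketch of my own in plain Python; the helper name `indicator` and the sample point `x = 5.5` are arbitrary illustrative choices, not part of the original answers.

```python
def indicator(n, x):
    """Indicator function of the interval [n, n+1], evaluated at x."""
    return 1 if n <= x <= n + 1 else 0

x = 5.5
# Only n = 5 gives a nonzero value at x = 5.5:
print([indicator(n, x) for n in range(1, 11)])  # [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]

# Pointwise convergence: the tail is identically 0 once n > x.
assert all(indicator(n, x) == 0 for n in range(6, 10**4))

# No uniform convergence: for every n, the point x = n gives the value 1,
# so sup_x |1_{[n,n+1]}(x) - 0| = 1 for all n.
assert all(indicator(n, n) == 1 for n in range(1, 10**4))
```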
Number of prime order elements of two non-isomorphic groups of same order
2,103,383
9
https://math.stackexchange.com/questions/2103383/number-of-prime-order-elements-of-two-non-isomorphic-groups-of-same-order
<p>For a given finite group $G$, let $\pi(G)$ denote the set of all prime divisors of $\vert G\vert$.</p> <p>Let $G_1$, $G_2$ be non-isomorphic finite groups of the same order. Then obviously $\pi(G_1)=\pi(G_2)$.</p> <p>Suppose $G_1$, $G_2$ have the same number of elements of each non-prime order.</p> <p>Then is it true that $G_1$, $G_2$ have the same number of elements of order $p$, for each $p \in \pi(G_1)$?</p> <p>I guess that it is true, but I couldn't prove it. Can anyone give an example for which this is not true?</p>
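As a concrete illustration of the hypothesis (this example is my own addition, not from the question): the non-isomorphic groups $A_4$ and $D_6$ both have order $12$, and their element-order profiles can be compared with a few lines of sympy. They already differ in the number of elements of the non-prime order $6$ ($0$ versus $2$), so they do not satisfy the hypothesis; a counter-example would need all non-prime-order counts to agree while some prime-order count differs.

```python
from collections import Counter
from sympy.combinatorics.named_groups import AlternatingGroup, DihedralGroup

def order_profile(G):
    """Count how many elements of each order a sympy PermutationGroup has."""
    return Counter(g.order() for g in G.elements)

A4 = AlternatingGroup(4)  # order 12
D6 = DihedralGroup(6)     # symmetries of the hexagon, also order 12

print(sorted(order_profile(A4).items()))  # [(1, 1), (2, 3), (3, 8)]
print(sorted(order_profile(D6).items()))  # [(1, 1), (2, 7), (3, 2), (6, 2)]
```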
[ { "answer_id": 2108938, "score": 4, "text": "<p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p>\n\n<p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p>\n\n<p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p>\n\n<p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p>\n\n<ul>\n<li><strong>Type $T$.</strong> The group is isomorphic to the set</li>\n</ul>\n\n<p>$$(\\mathbb Z/3\\mathbb Z)^\\Gamma\\times \\{\\pm 1\\}$$</p>\n\n<p>with the product:</p>\n\n<p>$$(h,a)\\cdot (k,b)=(hk^a,ab)$$</p>\n\n<p>where $(\\mathbb Z/3\\mathbb Z)^\\Gamma$ is the set of maps from a given set $\\Gamma$ (of cardinality $n$) to $\\mathbb Z/3\\mathbb Z$.</p>\n\n<p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\\mathfrak S_3$.</p>\n\n<ul>\n<li><strong>Type $S$.</strong> The group is isomorphic to the set</li>\n</ul>\n\n<p>$$\\left((\\mathbb Z/2\\mathbb Z)^2\\right)^\\Gamma\\times \\mathbb Z/3\\mathbb Z$$</p>\n\n<p>with the product:</p>\n\n<p>$$(h,a)\\cdot (k,b)=(h\\cdot\\alpha^a(k),a+b)$$</p>\n\n<p>where we think of $\\mathbb Z/3\\mathbb 2$ as $\\{0,1,2\\}$ under addition modulo $3$, and $\\alpha$ is a cyclic permutation of three nonidentity elements of $(\\mathbb Z/2\\mathbb Z)^2$.</p>\n\n<p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\\mathfrak A_4$.</p>\n\n<hr>\n\n<p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p>\n\n<p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p>\n\n<p>I wish someone find a better result.</p>\n\n<p><em>Sources : <a href=\"http://jlms.oxfordjournals.org/content/s1-12/3/195.extract\" rel=\"nofollow noreferrer\">one article</a>, <a href=\"http://www.cs.umb.edu/~eb/order2or3.pdf\" rel=\"nofollow noreferrer\">another article</a> and <a href=\"http://www.sciencedirect.com/science/article/pii/0021869391902719\" rel=\"nofollow noreferrer\">a final article</a>.</em></p>\n" }, { "answer_id": 4236511, "score": 3, "text": "<p>There are no examples up to order <span class=\"math-container\">$700$</span>, by a quick search in Magma.</p>\n<p>There are also no examples with only elements of prime order (or <span class=\"math-container\">$1$</span>). This follows from a result <a href=\"https://www.jstor.org/stable/2159554\" rel=\"nofollow noreferrer\">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
This follows from a result <a href=\"https://www.jstor.org/stable/2159554\" rel=\"nofollow noreferrer\">from a paper of Deaconescu et al</a>, which classifies such finite groups. They are one of the following:</p>\n<ol>\n<li>A <span class=\"math-container\">$p$</span>-group of exponent <span class=\"math-container\">$p$</span>;</li>\n<li>The alternating group <span class=\"math-container\">$A_5$</span>;</li>\n<li>A Frobenius group <span class=\"math-container\">$P\\rtimes C_q$</span> for some <span class=\"math-container\">$p$</span>-group <span class=\"math-container\">$P$</span> of exponent <span class=\"math-container\">$p$</span> and some prime <span class=\"math-container\">$q\\neq p$</span>.</li>\n</ol>\n<p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p>\n<p>The general problem seems difficult.</p>\n" }, { "answer_id": 2108938, "score": 4, "text": "<p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p>\n\n<p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p>\n\n<p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p>\n\n<p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p>\n\n<ul>\n<li><strong>Type $T$.</strong> The group is isomorphic to the set</li>\n</ul>\n\n<p>$$(\\mathbb Z/3\\mathbb Z)^\\Gamma\\times \\{\\pm 1\\}$$</p>\n\n<p>with the product:</p>\n\n<p>$$(h,a)\\cdot (k,b)=(hk^a,ab)$$</p>\n\n<p>where $(\\mathbb Z/3\\mathbb Z)^\\Gamma$ is the set of maps from a given set $\\Gamma$ (of cardinality $n$) to $\\mathbb Z/3\\mathbb Z$.</p>\n\n<p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\\mathfrak S_3$.</p>\n\n<ul>\n<li><strong>Type $S$.</strong> The group is isomorphic to the set</li>\n</ul>\n\n<p>$$\\left((\\mathbb Z/2\\mathbb Z)^2\\right)^\\Gamma\\times \\mathbb Z/3\\mathbb Z$$</p>\n\n<p>with the product:</p>\n\n<p>$$(h,a)\\cdot (k,b)=(h\\cdot\\alpha^a(k),a+b)$$</p>\n\n<p>where we think of $\\mathbb Z/3\\mathbb 2$ as $\\{0,1,2\\}$ under addition modulo $3$, and $\\alpha$ is a cyclic permutation of three nonidentity elements of $(\\mathbb Z/2\\mathbb Z)^2$.</p>\n\n<p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\\mathfrak A_4$.</p>\n\n<hr>\n\n<p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p>\n\n<p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p>\n\n<p>I wish someone find a better result.</p>\n\n<p><em>Sources : <a href=\"http://jlms.oxfordjournals.org/content/s1-12/3/195.extract\" rel=\"nofollow noreferrer\">one article</a>, <a href=\"http://www.cs.umb.edu/~eb/order2or3.pdf\" rel=\"nofollow noreferrer\">another article</a> and <a href=\"http://www.sciencedirect.com/science/article/pii/0021869391902719\" rel=\"nofollow noreferrer\">a final article</a>.</em></p>\n" }, { "answer_id": 4236511, "score": 3, "text": "<p>There are no examples up to order <span class=\"math-container\">$700$</span>, by a quick search in 
Magma.</p>\n<p>There are also no examples with only elements of prime order (or <span class=\"math-container\">$1$</span>). This follows from a result <a href=\"https://www.jstor.org/stable/2159554\" rel=\"nofollow noreferrer\">from a paper of Deaconescu et al</a>, which classifies such finite groups. They are one of the following:</p>\n<ol>\n<li>A <span class=\"math-container\">$p$</span>-group of exponent <span class=\"math-container\">$p$</span>;</li>\n<li>The alternating group <span class=\"math-container\">$A_5$</span>;</li>\n<li>A Frobenius group <span class=\"math-container\">$P\\rtimes C_q$</span> for some <span class=\"math-container\">$p$</span>-group <span class=\"math-container\">$P$</span> of exponent <span class=\"math-container\">$p$</span> and some prime <span class=\"math-container\">$q\\neq p$</span>.</li>\n</ol>\n<p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p>\n<p>The general problem seems difficult.</p>\n" } ]
Question: Number of prime order elements of two non-isomorphic groups of same order Answer 1: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else from losing time in this direction.</em></p> <p>I tried to find two groups in which there would not be any elements of order $&gt;3$. Then we would have two non-isomorphic groups with different numbers of elements of order $2$ and $3$ respectively, and we would have won.</p> <p>But such a group is of one of the following types (both of which are acceptable groups with respect to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb Z$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order except possibly when $n=1$, and that case doesn't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example among groups with only elements of order $2$ and $3$.</p> <p>I hope someone finds a better result.</p> <p><em>Sources: <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 2: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 3: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 4: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 5: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 6: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 7: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 8: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 9: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 10: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 11: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 12: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 13: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 14: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 15: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 16: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 17: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 18: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 19: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 20: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 21: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 22: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 23: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 24: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p> Answer 25: <p><em>Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.</em></p> <p><em>I didn't find an explicit one, but I did find what would not work, so I am sharing it here to prevent someone else to lose time in this direction.</em></p> <p>I tried to find two groups where they would not be any elements of order $&gt;3$. Then we would have two non isomorphic groups, with not the same number of elements of order $2$ and $3$ respectively and we would have won.</p> <p>But such a groupe is of one of the following types (which both works as acceptable groups regarding to our conditions):</p> <ul> <li><strong>Type $T$.</strong> The group is isomorphic to the set</li> </ul> <p>$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(hk^a,ab)$$</p> <p>where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.</p> <p>*We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.</p> <ul> <li><strong>Type $S$.</strong> The group is isomorphic to the set</li> </ul> <p>$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$</p> <p>with the product:</p> <p>$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$</p> <p>where we think of $\mathbb Z/3\mathbb 2$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.</p> <p>*We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.</p> <hr> <p>We can notice that a group of type $S$ and a group of type $T$ won't have the same order unless maybe if $n=1$, which won't work either.</p> <p><strong>Conclusion:</strong> there is no counter-example with of group with only elements of order $2$ and $3$.</p> <p>I wish someone find a better result.</p> <p><em>Sources : <a href="http://jlms.oxfordjournals.org/content/s1-12/3/195.extract" rel="nofollow noreferrer">one article</a>, <a href="http://www.cs.umb.edu/~eb/order2or3.pdf" rel="nofollow noreferrer">another article</a> and <a href="http://www.sciencedirect.com/science/article/pii/0021869391902719" rel="nofollow noreferrer">a final article</a>.</em></p> Answer 26: <p>There are no examples up to order <span class="math-container">$700$</span>, by a quick search in Magma.</p> <p>There are also no examples with only elements of prime order (or <span class="math-container">$1$</span>). This follows from a result <a href="https://www.jstor.org/stable/2159554" rel="nofollow noreferrer">from a paper of Deaconescu et al</a>, which classifies such finite groups. 
They are one of the following:</p> <ol> <li>A <span class="math-container">$p$</span>-group of exponent <span class="math-container">$p$</span>;</li> <li>The alternating group <span class="math-container">$A_5$</span>;</li> <li>A Frobenius group <span class="math-container">$P\rtimes C_q$</span> for some <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> of exponent <span class="math-container">$p$</span> and some prime <span class="math-container">$q\neq p$</span>.</li> </ol> <p>In all three cases the number of elements of each order can be deduced purely from the group order, and of course there is no overlap of orders between the three categories.</p> <p>The general problem seems difficult.</p>
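To make the order-counting in these answers concrete, here is a minimal Python sketch (an illustration of ours, assuming SymPy is installed; the helper name `order_spectrum` does not come from any of the answers) that tabulates how many elements of each order a small permutation group contains. $\mathfrak S_3$ and $\mathfrak A_4$ are the $n=1$ instances of types $T$ and $S$ above, and $A_5$ is the exceptional case of the classification.

```python
from collections import Counter
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

def order_spectrum(G):
    # Map each element order to the number of group elements of that order.
    return Counter(g.order() for g in G.elements)

print(order_spectrum(SymmetricGroup(3)))    # counts: order 1 -> 1, order 2 -> 3, order 3 -> 2
print(order_spectrum(AlternatingGroup(4)))  # counts: order 1 -> 1, order 2 -> 3, order 3 -> 8
print(order_spectrum(AlternatingGroup(5)))  # counts: 1 -> 1, 2 -> 15, 3 -> 20, 5 -> 24
```

Enumerating all elements is only feasible for small groups, but it confirms that $\mathfrak S_3$ and $\mathfrak A_4$ contain no elements of order greater than $3$, as the type $T$ and type $S$ descriptions require.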
Find the limit $\lim _{ x \to 0} \frac{\sqrt[3]{1+6x+3x^2+3x^3+3x^4}-\sqrt[4]{1+8x+4x^2+4x^3-2x^4}}{6x^2}$
2,104,468
9
https://math.stackexchange.com/questions/2104468/find-the-limit-lim-x-to-0-frac-sqrt316x3x23x33x4-sqrt41
<p>I have been trying to find the limit, $$\lim _{ x \to 0} \frac{\sqrt[3]{1+6x+3x^2+3x^3+3x^4}-\sqrt[4]{1+8x+4x^2+4x^3-2x^4}}{6x^2}$$</p> <p>and sort of succeeded. But my answer of $0$ doesn't agree with what Wolfram says, which is $1/3$. Therefore, I would really appreciate it if you could give me the right answer and drop a hint about the solution.</p>
[ { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. 
Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. 
So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. 
Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. 
Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. 
So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. 
Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. 
Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. 
So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. 
Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. 
Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. 
So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. 
Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" }, { "answer_id": 2104506, "score": 6, "text": "<p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \\cdots + v^{11}$$ to get $$\\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \\to 0$ the variable $t \\to 12$ and hence the desired limit is $$\\frac{1}{72}\\lim_{x\\to 0}\\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>\n\n<hr>\n\n<p>On OP's request the solution by Taylor series is provided below.</p>\n\n<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \\frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\\to 0$. 
Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>\n" }, { "answer_id": 2104503, "score": 3, "text": "<p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>\n\n<p>Use the identity </p>\n\n<p>$$a^{12}-b^{12}=\\\\\n(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>\n\n<p>By multiplying/dividing by the conjugate \"dodecanomial\", the numerator becomes the polynomial</p>\n\n<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\\\\n(1+24x+228x^2+\\cdots81x^{16})-(1+24x+204x^2+\\cdots-8x^{12}),$$</p>\n\n<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>\n\n<p>$$\\frac{228-204}{6\\cdot12}=\\frac13.$$</p>\n\n<hr>\n\n<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>\n\n<p>$$\\frac{\\sqrt[3]{1+\\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\\sqrt[4]{1+\\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>\n\n<p>It is not hard to show that the Taylor development will yield</p>\n\n<p>$$\\frac{(1-\\frac93x^2+\\cdots)-(1-\\frac{20}4x^2+\\cdots)}{6x^2}.$$</p>\n" }, { "answer_id": 2104486, "score": 1, "text": "<p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>\n" } ]
Question: Find the limit $\lim _{ x \to 0} \frac{\sqrt[3]{1+6x+3x^2+3x^3+3x^4}-\sqrt[4]{1+8x+4x^2+4x^3-2x^4}}{6x^2}$
Answer 1: <p>Another approach (apart from the one mentioned in my comments) is to use Binomial Theorem. Let the expression be denoted by $$\frac{u - v}{w}$$ and then multiply numerator and denominator by $$t = u^{11} + u^{10}v + \cdots + v^{11}$$ to get $$\frac{u^{12} - v^{12}}{wt}$$ and note that since $u, v$ both tend to $1$ as $x \to 0$ the variable $t \to 12$ and hence the desired limit is $$\frac{1}{72}\lim_{x\to 0}\frac{(1+6x+3x^2+3x^3+3x^4)^{4} - (1+8x+4x^2+4x^3-2x^4)^{3}}{x^{2}}$$ As can be seen the numerator can be written as $$(1 + 24x + 228x^{2} + o(x^{2})) - (1 + 24x + 204x^{2} + o(x^{2}))$$ or $$24x^{2} + o(x^{2})$$ and hence the answer is $24/72 = 1/3$.</p>

<hr>

<p>On OP's request the solution by Taylor series is provided below.</p>

<p>We have the Taylor series for $(1 + x)^{n}$ for all real $n$ as $$(1 + x)^{n} = 1 + nx + \frac{n(n - 1)}{2!}x^{2} + o(x^{2})$$ as $x\to 0$. Hence $$(1+6x+3x^2+3x^3+3x^4)^{1/3} = 1 + 2x - 3x^{2} + o(x^{2})$$ and $$(1+8x+4x^2+4x^3-2x^4)^{1/4} = 1 + 2x - 5x^{2} + o(x^{2})$$ so the answer now comes easily as $1/3$.</p>
Answer 2: <p><strong>Without Taylor nor the generalized Binomial theorem</strong>:</p>

<p>Use the identity </p>

<p>$$a^{12}-b^{12}=\\
(a-b)(a^{11}+a^{10}b^{1}+a^{9}b^{2}+a^{8}b^{3}+a^{7}b^{4}+a^{6}b^{5}+a^{5}b^{6}+a^{4}b^{7}+a^{3}b^{8}+a^{2}b^{9}+ab^{10}+b^{11}).$$</p>

<p>By multiplying/dividing by the conjugate "dodecanomial", the numerator becomes the polynomial</p>

<p>$$(1+6x+3x^2+3x^3+3x^4)^4-(1+8x+4x^2+4x^3-2x^4)^3=\\
(1+24x+228x^2+\cdots81x^{16})-(1+24x+204x^2+\cdots-8x^{12}),$$</p>

<p>while the denominator is a sum of $12$ rational expressions that tend to $1$. So after simplification, the limit is</p>

<p>$$\frac{228-204}{6\cdot12}=\frac13.$$</p>

<hr>

<p>Alternatively, you can pull a factor $(1+2x)$ from both radicals and get</p>

<p>$$\frac{\sqrt[3]{1+\frac{-9x^2-5x^3+3x^4}{(1+2x)^3}}-\sqrt[4]{1+\frac{-20x^2-28x^3-18x^4}{(1+2x)^4}}}{6x^2}.$$</p>

<p>It is not hard to show that the Taylor development will yield</p>

<p>$$\frac{(1-\frac93x^2+\cdots)-(1-\frac{20}4x^2+\cdots)}{6x^2}.$$</p>
Answer 3: <p>First, do a Binomial expansion of the cube root expressions, simplify the numerator, cancel $x^2$ terms and then conduct the limit. You should get $1/3$.</p>
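Both answers above arrive at $1/3$; the value can also be cross-checked symbolically. A minimal sketch, assuming a Python environment with sympy available (the sketch is not part of the original thread):

<pre><code>from sympy import symbols, limit, root

# Symbolic check of the limit discussed above; root(e, n) is sympy's n-th root.
x = symbols('x')
expr = (root(1 + 6*x + 3*x**2 + 3*x**3 + 3*x**4, 3)
        - root(1 + 8*x + 4*x**2 + 4*x**3 - 2*x**4, 4)) / (6*x**2)
print(limit(expr, x, 0))  # expected output: 1/3
</code></pre>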
Is $7^4-1=2^5 \times 3 \times 5^2$ the only counterexample?
2,104,566
9
https://math.stackexchange.com/questions/2104566/is-74-1-25-times-3-times-52-the-only-counterexample
<p>My friend once told me his conjecture:</p>

<blockquote>
  <p>For every prime $p$ and every integer $n$ greater than $2$, $p^n-1$ must have a prime factor greater than $p$.</p>
</blockquote>

<p>A few minutes later I found a counterexample:</p>

<blockquote>
  <p>$7^4-1=2^5 \times 3 \times 5^2$</p>
</blockquote>

<p>However, is there any other counterexample? Could you help me?</p>
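The question's counterexample is easy to verify by direct factorization, and the answers below also surface a cubic case, $67^3-1$. A minimal sketch, assuming Python with sympy (not part of the original post):

<pre><code>from sympy import factorint, primefactors

# Check that every prime factor of p**n - 1 stays below p in these cases.
for p, n in [(7, 4), (67, 3)]:
    m = p**n - 1
    print(p, n, m, factorint(m), max(primefactors(m)))
# Expected:
# 7 4 2400 {2: 5, 3: 1, 5: 2} 5                       -- largest factor 5, below 7
# 67 3 300762 {2: 1, 3: 2, 7: 2, 11: 1, 31: 1} 31     -- largest factor 31, below 67
</code></pre>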
[ { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3^2\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all primes up to k, all powers n = 3 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. 
</p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. 
</p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. 
</p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. 
</p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. 
</p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. 
\n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" }, { "answer_id": 2104671, "score": 5, "text": "<p>Searching for the smallest counterexample for each power gives the following:</p>\n\n<p>$$\\begin{gather*}\n67^3-1=300762=2\\times3\\times7^2\\times11\\times31\\\\\n7^4-1=2400=2^5\\times3\\times5^2\\\\\n7307^5-1=20830300574702353306=2\\times11\\times13\\times151\\times191\\times281\\times911\\times1481\\times6661\\\\\n263^6-1=330928743953808=2^4\\times3^2\\times7^2\\times11\\times13\\times103\\times109\\times131\\times223\\\\\n493397^7-1=7128392726072389152475777731457041578312=2^2\\times23\\times29^2\\times31\\times127\\times173\\times1163\\times2129\\times4229\\times26041\\times50177\\times71359\\times138349\\\\\n10181^8-1=115431276825545828982660756380640=2^5\\times3\\times5\\times17\\times37\\times509\\times677\\times1657\\times1697\\times2069\\times4657\\times5113\\times8009\\\\\n734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\\times3\\times11\\times17\\times19\\times41\\times59\\times61\\times71\\times131\\times139\\times191\\times761\\times2161\\times9871\\times10301\\times12391\\times14341\\times27901\\times181711\\times699511\\\\\n\\end{gather*}$$</p>\n" }, { "answer_id": 2104590, "score": 1, "text": "<p>matlab code:</p>\n\n<pre><code>function pt(m, k)\n% test all up to k prime, all powers from 2 to m. \n% fuind factors of p^n - 1, and see whether all are less than p\n\nps = primes(k);\n\nfor p=ps\n disp(p)\n for n = 3:m\n f = factor(p^n - 1);\n b = (f &lt;= p);\n if (all(b))\n 'found'\n disp(p)\n disp(n)\n disp(factor(p^n - 1))\n end\n end\nend\n</code></pre>\n\n<p>Results (abbreviated):</p>\n\n<pre><code>&gt; pt(7, 500)\n 7 4\n 41 4\n 43 4\n 47 4\n 67 3\n 73 4\n 79 3\n 83 4\n 137 3\n 149 3\n 157 4\n 163 3\n 173 4\n 181 3\n 191 3\n 191 4\n</code></pre>\n\n<p>Notice that I only tested up to 7th powers. \"191\" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>\n" } ]
Question: Is $7^4-1=2^5 \times 3 \times 5^2$ the only counterexample? Answer 1: <p>Searching for the smallest counterexample for each power gives the following:</p> <p>$$\begin{gather*} 67^3-1=300762=2\times3^2\times7^2\times11\times31\\ 7^4-1=2400=2^5\times3\times5^2\\ 7307^5-1=20830300574702353306=2\times11\times13\times151\times191\times281\times911\times1481\times6661\\ 263^6-1=330928743953808=2^4\times3^2\times7^2\times11\times13\times103\times109\times131\times223\\ 493397^7-1=7128392726072389152475777731457041578312=2^2\times23\times29^2\times31\times127\times173\times1163\times2129\times4229\times26041\times50177\times71359\times138349\\ 10181^8-1=115431276825545828982660756380640=2^5\times3\times5\times17\times37\times509\times677\times1657\times1697\times2069\times4657\times5113\times8009\\ 734197^{10}-1=45512014174960703207280356067658088285424423446151729273048=2^3\times3\times11\times17\times19\times41\times59\times61\times71\times131\times139\times191\times761\times2161\times9871\times10301\times12391\times14341\times27901\times181711\times699511\\ \end{gather*}$$</p> Answer 2: <p>matlab code:</p> <pre><code>function pt(m, k) % test all up to k prime, all powers from 3 to m. % find factors of p^n - 1, and see whether all are less than p ps = primes(k); for p=ps disp(p) for n = 3:m f = factor(p^n - 1); b = (f &lt;= p); if (all(b)) 'found' disp(p) disp(n) disp(factor(p^n - 1)) end end end </code></pre> <p>Results (abbreviated):</p> <pre><code>&gt; pt(7, 500) 7 4 41 4 43 4 47 4 67 3 73 4 79 3 83 4 137 3 149 3 157 4 163 3 173 4 181 3 191 3 191 4 </code></pre> <p>Notice that I only tested up to 7th powers. "191" is an interesting failure case, because it fails twice. Also note that this list provides examples with $n = 3$ as well as $n = 4$, so $n = 4$ isn't the only possibility. </p>
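<p>A quick sanity check of one entry in the list above (a minimal editorial sketch in MATLAB, the language of Answer 2; it relies only on the built-in <code>factor</code> and <code>all</code> functions):</p>

<pre><code>% Confirm that 67^3 - 1 is a counterexample for n = 3, i.e. that every
% prime factor of 67^3 - 1 is strictly smaller than 67.
p = 67; n = 3;
f = factor(p^n - 1)   % prints 2 3 3 7 7 11 31, i.e. 2 * 3^2 * 7^2 * 11 * 31
all(f &lt; p)            % prints 1 (true): all prime factors lie below p
</code></pre>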
Hilbert Scheme of Points of Riemann Sphere
2,104,832
9
https://math.stackexchange.com/questions/2104832/hilbert-scheme-of-points-of-riemann-sphere
<p>I'm looking for a comprehensive note/paper/chapter of a book which discusses the <em>Hilbert Scheme of Points of the Riemann sphere ($\mathbb{P}_{\mathbb{C}}^1$)</em> (maybe via a less abstract, more constructive approach?).</p> <p>For the case of $\mathbb{C}^2$ the Hilbert scheme of $n$ points, $\mathrm{Hilb}^n(\mathbb{C}^2)$, turns out to be smooth and symplectic. I'm wondering if $\mathrm{Hilb}^n(\mathbb{P}^1_{\mathbb{C}})$ also shares those nice properties. More generally, what does $\mathrm{Hilb}^n(\mathbb{P}^1_{\mathbb{C}})$ look like as a scheme? I'm especially interested in knowing about morphisms $\mathrm{Hilb}^n(\mathbb{P}_\mathbb{C}^1)\to \mathbb{A}^1_\mathbb{C}$. Or maybe, more generally, rational functions instead of functions?</p> <p>I'm a rookie in algebraic geometry, so I'm hoping that a simple special case like the Riemann sphere can be treated more directly, rather than via the usual abstract approach (which I still have trouble understanding). At the end of the day, understanding $\mathrm{Hilb}^n(\mathbb{P}_{\mathbb{C}}^1)$ is all I need for pushing my theoretical physics research further.</p>
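<p>As the answers below explain, $\mathrm{Hilb}^n(\mathbb{P}^1_{\mathbb{C}})\cong\mathbb{P}^n$: a length-$n$ subscheme is recorded by the coefficients of a degree-$n$ binary form. A worked $n=2$ instance of that identification (a sketch added for concreteness, using the symbols $[u_i:v_i]$ of the first answer):</p>

<p>$$([u_1:v_1],[u_2:v_2])\;\mapsto\;(v_1x-u_1y)(v_2x-u_2y)=v_1v_2\,x^2-(v_1u_2+u_1v_2)\,xy+u_1u_2\,y^2\;\mapsto\;[v_1v_2:-(v_1u_2+u_1v_2):u_1u_2]\in\mathbb{P}^2.$$</p>

<p>For example, the non-reduced subscheme supported doubly at $[1:1]$ goes to $(x-y)^2=x^2-2xy+y^2$, i.e. to the point $[1:-2:1]\in\mathbb{P}^2$; such non-reduced subschemes are exactly the vanishing locus of the discriminant that appears in the third answer.</p>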
[ { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}(k^{n+1}) \\cong \\mathbb{P}^n$.</p>\n" }, { "answer_id": 2105868, "score": 4, "text": "<p>Here is another, extremely geometric, vision of the Hilbert scheme $\\mathcal H=\\mathcal H^{[n]}$ of $n$ points in the projective line $\\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\\subset \\mathbb P^1$ with Hilbert polynomial the constant polynomial $n\\in \\mathbb Q[T]$.<br>\nLike all hypersurfaces of any projective space, each $S=S_a\\subset \\mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \\mathcal H=V(f(a;x,y)) \\subset \\mathbb P^n_a\\times \\mathbb P^1_{x:y} $$ endowed with its projection $$p:\\mathcal H\\to \\mathbb P^n_a:([a_0:a_1:\\cdots:a_n],[x:y])\\mapsto [a_0:a_1:\\cdots:a_n]$$ so that $p^{-1}(a)=S_a\\subset \\mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n=0$.<br>\nThe Hilbert scheme $\\mathcal H$ is smooth and connected but its fibres over $\\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\\in \\mathbb P^n$ such that the discriminant $$\\operatorname {discr} \\: (a_0x^n+a_{n-1}x^{n-1}y+\\cdots+a_ny^n)\\in \\mathbb C[a_0,\\cdots,a_n]$$ is zero.<br>\nThis is in line with the fact that $\\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\\mathcal H\\to \\mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p>\n\n<p><strong>Remarks</strong><br>\n1) The construction shows that $\\mathcal H$ is projective so that all morphisms to $\\mathbb A^1_\\mathbb C$ are constant, which answers one of Hamed's questions.<br>\n2) The connectednes of $\\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br>\n3) The smoothness of $\\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br>\nThese calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\\mathbb P^N$.<br>\nUnfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>\n" }, { "answer_id": 2104960, "score": 10, "text": "<p>The Hilbert scheme of $n$ points on $\\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\\mathbb P^1$, and we will see that it is isomorphic to $\\mathbb P^n$.<br>\nTo understand this concretely it is best to interpret $\\mathbb P^n$ as the projective space associated to the vector space $\\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br>\nIn other words, the polynomial $a_0x^n+a_1x^{n-1}y+\\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\\cdots:a_n]\\in \\mathbb P^n$<br>\nThe required isomorphism then comes from the morphism $F:(\\mathbb P^1)^n\\to \\mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\\Pi_{i=1}^n (v_i x-u_iy)\\in \\mathbb C[x,y]_n$.<br>\nDividing out by the action of the symmetric group $S_n$ on $(\\mathbb P^1)^n$ we get the required isomorphism $$f:\\operatorname {Hilb}^n(\\mathbb P^1)=(\\mathbb P^1)^n/S_n=\\operatorname {Sym}^n(\\mathbb P^1)\\stackrel {\\cong}{\\to} \\mathbb P^n$$</p>\n" }, { "answer_id": 2104861, "score": 8, "text": "<p>$Hilb^n(\\mathbb{P}^1) \\cong \\mathbb{P}^n$.</p>\n\n<p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \\hookrightarrow \\mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \\cong \\mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\\mathcal{O}(-n),\\mathcal{O}) = H^0(\\mathbb{P}^1,\\mathcal{O}(n)) = k^{n+1}$. 
Question: Hilbert Scheme of Points of Riemann Sphere Answer 1: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ not necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables.<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$.<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\prod_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p>
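<p>For concreteness, here is the map $F$ written out in the first non-trivial case $n=2$ (a routine expansion of the product of the two linear forms, worked here as a sanity check): $$F([u_1:v_1],[u_2:v_2])=[\,v_1v_2\,:\,-(u_1v_2+u_2v_1)\,:\,u_1u_2\,]\in \mathbb P^2.$$ The coefficients are visibly symmetric in the two points, so $F$ indeed factors through $\operatorname{Sym}^2(\mathbb P^1)$; for instance the pair $\{[0:1],[1:0]\}$ is sent to the class $[0:1:0]$ of the polynomial $-xy$.</p>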
Answer 2: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined, up to a nonzero scalar, by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p>
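<p>To make the dimension count in this argument explicit: $H^0(\mathbb{P}^1,\mathcal{O}(n))$ has the monomial basis $$x^n,\ x^{n-1}y,\ \dots,\ y^n \qquad (n+1\ \text{sections}),$$ and two sections cut out the same length-$n$ subscheme exactly when they differ by a nonzero scalar, which is why one passes to the projectivization $\mathbb{P}(k^{n+1})\cong \mathbb{P}^n$ rather than taking $k^{n+1}$ itself.</p>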
Answer 3: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n=0$, and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$, being simply connected, cannot have a connected étale covering of degree two, and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus.</p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectedness of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D. thesis: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polynomial) is connected.<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p>
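<p>In the case $n=2$ the discriminant above is the familiar one from the quadratic formula: $$\operatorname{discr}\,(a_0x^2+a_1xy+a_2y^2)=a_1^2-4a_0a_2,$$ so the branch locus of $p:\mathcal H\to \mathbb P^2$ is the smooth conic $a_1^2=4a_0a_2$, which is exactly the image of the diagonal of $\operatorname{Sym}^2(\mathbb P^1)$: indeed $(vx-uy)^2=v^2x^2-2uv\,xy+u^2y^2$, and $(-2uv)^2-4\cdot v^2\cdot u^2=0$.</p>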
Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 9: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 10: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 11: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 12: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 13: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel 
{\cong}{\to} \mathbb P^n$$</p> Answer 14: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 15: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 16: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 17: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 18: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 19: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 20: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. 
Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 21: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 22: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 23: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 24: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 25: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel 
{\cong}{\to} \mathbb P^n$$</p> Answer 26: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 27: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 28: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 29: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 30: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 31: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 32: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. 
Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 33: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 34: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 35: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). 
A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 36: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 37: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel 
{\cong}{\to} \mathbb P^n$$</p> Answer 38: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 39: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. 
</p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 40: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 41: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. Every ideal on a smooth curve is invertible, hence $I_Z \cong \mathcal{O}(-n)$, and its embedding is determined by an element of $Hom(\mathcal{O}(-n),\mathcal{O}) = H^0(\mathbb{P}^1,\mathcal{O}(n)) = k^{n+1}$. 
Thus $Hilb^n(\mathbb{P}^1) \cong \mathbb{P}(k^{n+1}) \cong \mathbb{P}^n$.</p> Answer 42: <p>Here is another, extremely geometric, vision of the Hilbert scheme $\mathcal H=\mathcal H^{[n]}$ of $n$ points in the projective line $\mathbb P^1_{x:y}$, in other words the Hilbert scheme of the subschemes $S\subset \mathbb P^1$ with Hilbert polynomial the constant polynomial $n\in \mathbb Q[T]$.<br> Like all hypersurfaces of any projective space, each $S=S_a\subset \mathbb P^1_{x:y}$ has an equation, namely $f(a;x,y)= a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$ and the Hilbert scheme is the hypersurface $$ \mathcal H=V(f(a;x,y)) \subset \mathbb P^n_a\times \mathbb P^1_{x:y} $$ endowed with its projection $$p:\mathcal H\to \mathbb P^n_a:([a_0:a_1:\cdots:a_n],[x:y])\mapsto [a_0:a_1:\cdots:a_n]$$ so that $p^{-1}(a)=S_a\subset \mathbb P^1$, the subscheme with equation $a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n=0$.<br> The Hilbert scheme $\mathcal H$ is smooth and connected but its fibres over $\mathbb P^n$ are not: the non-smooth fibres $S_a$ of $p$ correspond to those $a\in \mathbb P^n$ such that the discriminant $$\operatorname {discr} \: (a_0x^n+a_{n-1}x^{n-1}y+\cdots+a_ny^n)\in \mathbb C[a_0,\cdots,a_n]$$ is zero.<br> This is in line with the fact that $\mathbb P^n$ being simply connected cannot have an étale connected covering of degree two and thus $p:\mathcal H\to \mathbb P^n$ <em>must</em> have a non-empty ramification locus. </p> <p><strong>Remarks</strong><br> 1) The construction shows that $\mathcal H$ is projective so that all morphisms to $\mathbb A^1_\mathbb C$ are constant, which answers one of Hamed's questions.<br> 2) The connectednes of $\mathcal H$ is an extremely special case of Hartshorne's brilliant Ph.D: he proved that every Hilbert scheme (corresponding to some fixed Hilbert polunomial) is connected<br> 3) The smoothness of $\mathcal H$ is a low-tech calculation using only the implicit function theorem of advanced calculus.<br> These calculations can easily be generalized to the case of the Hilbert scheme of hypersurfaces of $\mathbb P^N$.<br> Unfortunately these calculations cannot be found in the literature (as far as I know), and they could be the theme of another question...</p> Answer 43: <p>The Hilbert scheme of $n$ points on $\mathbb P^1$ is the space of unordered families of $n$ non-necessarily distinct points of $\mathbb P^1$, and we will see that it is isomorphic to $\mathbb P^n$.<br> To understand this concretely it is best to interpret $\mathbb P^n$ as the projective space associated to the vector space $\mathbb C[x,y]_n$ of homogeneous polynomials of degree $n$ in two variables .<br> In other words, the polynomial $a_0x^n+a_1x^{n-1}y+\cdots+a_ny^n$ is seen as the point $[a_0:a_1:\cdots:a_n]\in \mathbb P^n$<br> The required isomorphism then comes from the morphism $F:(\mathbb P^1)^n\to \mathbb P^n$ sending an $n$-tuple $([u_i:v_i])_{i=1}^n$ to the point corresponding to the polynomial $\Pi_{i=1}^n (v_i x-u_iy)\in \mathbb C[x,y]_n$.<br> Dividing out by the action of the symmetric group $S_n$ on $(\mathbb P^1)^n$ we get the required isomorphism $$f:\operatorname {Hilb}^n(\mathbb P^1)=(\mathbb P^1)^n/S_n=\operatorname {Sym}^n(\mathbb P^1)\stackrel {\cong}{\to} \mathbb P^n$$</p> Answer 44: <p>$Hilb^n(\mathbb{P}^1) \cong \mathbb{P}^n$.</p> <p>EDIT (an explanation). A subscheme $Z$ of length $n$ is determined by its ideal $I_Z$ and its embedding $I_Z \hookrightarrow \mathcal{O}$. 